predictive image coding device, predictive image coding method, predictive image decoding device and predictive image decoding method
Patent Abstract:
Predictive image coding device, predictive image coding method, predictive image coding program, predictive image decoding device, predictive image decoding method, and predictive image decoding program. The present invention relates to a coding target region in an image that is partitioned into a plurality of prediction regions. Based on the prediction information of a region neighboring the target region, the number of previously encoded prediction regions in the target region, and the previously encoded prediction information of the target region, a candidate for motion information to be used in generating a predicted signal of a target prediction region serving as the next prediction region is selected from the previously encoded motion information of regions neighboring the target prediction region. According to the number of selected candidates for motion information, either both fusion block information, indicating generation of the predicted signal of the target prediction region using the selected candidate for motion information, and motion information detected by prediction information estimation, or one of the fusion block information and the motion information, is encoded. In addition, the motion information to be used in generating the predicted signal of the target prediction region is stored in a prediction information storage means.
Publication number: BR112013001351B1
Application number: R112013001351-6
Filing date: 2011-07-14
Publication date: 2019-01-29
Inventors: Yoshinori Suzuki; Junya TAKIUE; Choong Seng Boon; Thiow Keng Tan
Applicant: NTT Docomo, Inc.
IPC main classification:
Patent Description:
PREDICTIVE IMAGE CODING DEVICE, PREDICTIVE IMAGE CODING METHOD, PREDICTIVE IMAGE DECODING DEVICE AND PREDICTIVE IMAGE DECODING METHOD TECHNICAL FIELD One aspect of the present invention relates to a predictive image coding device, a predictive image coding method, and a predictive image coding program. Another aspect of the present invention relates to a predictive image decoding device, a predictive image decoding method, and a predictive image decoding program. In particular, these aspects relate to a predictive image coding device, a predictive image coding method, a predictive image coding program, a predictive image decoding device, a predictive image decoding method, and a predictive image decoding program for generating a predicted signal of a target block using motion information of surrounding blocks. Still another aspect of the present invention relates to a video coding device, a video coding method, a video coding program, a video decoding device, a video decoding method, and a video decoding program for generating a motion-compensated predicted signal by means of a motion vector. BACKGROUND ART Compression coding technologies are used for efficient transmission and storage of still images and video data. The systems of MPEG-1 to 4 and ITU (International Telecommunication Union) H.261 to H.264 are commonly used for video data. In these coding systems, an image serving as a coding target is divided into a plurality of blocks, and then a coding process or a decoding process is carried out. In intra-frame predictive coding, a predicted signal is generated using a previously reconstructed neighboring image signal (i.e., a signal reconstructed from compressed image data) present in the same frame as a target block, and then a differential signal obtained by subtracting the predicted signal from the signal of the target block is encoded. In inter-frame predictive coding, motion compensation is performed with reference to a previously reconstructed neighboring image signal present in a frame different from that of a target block to generate a predicted signal, and a differential signal obtained by subtracting the predicted signal from the signal of the target block is encoded. For example, H.264 intra-frame predictive coding employs a method of generating the predicted signal by extrapolating, in a predetermined direction, previously reconstructed pixel values neighboring a block serving as a coding target. Figure 22 is a schematic diagram that serves to explain the intra-frame prediction method used in ITU H.264. In (A) of Figure 22, target block 802 is a block serving as a coding target, and a pixel group 801 consisting of pixels Pa-Pl neighboring a boundary of the target block 802 is a neighboring region, which consists of an image signal previously reconstructed in past processing. In the case shown in (A) of Figure 22, the predicted signal is generated by extending downward the pixel group 801, which includes neighboring pixels located just above the target block 802. In the case shown in (B) of Figure 22, the predicted signal is generated by extending to the right the previously reconstructed pixels (Pi-Pl) located to the left of target block 804. Specific methods for generating the predicted signal are described, for example, in Patent Literature 1. A difference is calculated between each of the nine predicted signals generated by the methods shown in (A) to (I) of Figure 22 in the manner described above and the pixel signal of the target block, and the predicted signal giving the smallest difference is selected as an optimum predicted signal.
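By way of illustration only, the extrapolation of (A) and (B) of Figure 22 and the selection among candidate predicted signals can be sketched as follows; this is a minimal NumPy sketch with hypothetical function names, not the H.264 implementation itself.

```python
import numpy as np

def intra_predict_vertical(top_pixels, block_size):
    # Mode (A) of Figure 22: copy the row of previously reconstructed
    # pixels located just above the target block downward.
    return np.tile(top_pixels[:block_size], (block_size, 1))

def intra_predict_horizontal(left_pixels, block_size):
    # Mode (B) of Figure 22: copy the column of previously reconstructed
    # pixels located to the left of the target block to the right.
    return np.tile(left_pixels[:block_size].reshape(-1, 1), (1, block_size))

def select_best_prediction(target, candidates):
    # Compute the difference between each candidate predicted signal and
    # the pixel signal of the target block; the candidate with the
    # smallest sum of absolute differences is the optimum predicted signal.
    sads = [np.abs(target.astype(int) - p.astype(int)).sum() for p in candidates]
    return int(np.argmin(sads))
```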
As described above, the predicted signal can be generated by pixel extrapolation. The foregoing contents are described in Patent Literature 1 below. In typical inter-frame predictive coding, the predicted signal for a block serving as a coding target is generated by a method of searching previously reconstructed images for a signal similar to the pixel signal of the target block. A motion vector, which is an amount of spatial displacement between the target block and a region composed of the detected signal, and a residual signal between the pixel signal of the target block and the predicted signal are then encoded. The technique of searching for a motion vector for each block in this manner is referred to as block matching. Figure 21 is a schematic diagram that serves to explain a block matching process. The following describes a procedure for generating a predicted signal for an example of a target block 702 in a coding target image 701. Image 703 is a previously reconstructed image, and region 704 is a region spatially located at the same position as the target block 702. In block matching, a search range 705 around the region 704 is set, and a region 706 that minimizes the sum of absolute errors with respect to the pixel signal of the target block 702 is detected from the pixel signals in this search range. The signal of this region 706 is determined to be the predicted signal, and the amount of displacement from the region 704 to the region 706 is detected as a motion vector 707. A method is also employed of preparing a plurality of reference images 703, selecting a reference image to be used in block matching for each target block, and detecting reference image selection information. In H.264, a plurality of prediction types with different block sizes are prepared for motion vector coding, in order to adapt to local changes in image features. The prediction types of H.264 are described, for example, in Patent Literature 2. In compression coding of video data, the coding order of images (frames or fields) may be arbitrary. For this reason, there are three techniques regarding coding order in inter-frame prediction to generate the predicted signal with reference to previously reconstructed images. The first technique is progressive prediction, which generates the predicted signal with reference to a past image previously reconstructed in a reproduction order; the second technique is regressive prediction, which generates the predicted signal with reference to a future image previously reconstructed in the reproduction order; and the third technique is bidirectional prediction, which performs both progressive and regressive prediction and averages the two predicted signals. The types of inter-frame prediction are described, for example, in Patent Literature 3. In HEVC (high-efficiency video coding), under standardization as a next-generation video coding system, the introduction of asymmetric divisions as shown in (E) through (F) of Figure 20 is also under consideration, in addition to the rectangular bisections shown in (B) and (C) of Figure 20 and the square division shown in (D) of Figure 20, as division types of a prediction block. In HEVC, an additional technique under study is to use motion information (the motion vector, the reference image information, and the inter-frame prediction mode identifying progressive/regressive/bidirectional prediction) of a block adjacent to a target prediction block serving as a prediction target when generating the predicted signal of the prediction block divided in this manner.
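A minimal sketch of the block matching described above, assuming grayscale images stored as NumPy arrays; the exhaustive full search below is illustrative, whereas practical encoders use faster search strategies.

```python
import numpy as np

def block_matching(target, ref, y0, x0, search_range):
    # Scan a search range around the co-located region 704 and return the
    # displacement (motion vector 707) whose reference region 706
    # minimizes the sum of absolute errors against the target block 702.
    h, w = target.shape
    best_mv, best_sad = (0, 0), float("inf")
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue  # candidate region falls outside the reference image
            sad = np.abs(ref[y:y + h, x:x + w].astype(int)
                         - target.astype(int)).sum()
            if sad < best_sad:
                best_mv, best_sad = (dy, dx), sad
    return best_mv
```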
This prediction technique is called block fusion and is characterized by allowing efficient coding of motion information. (A) of Figure 2 is a drawing schematically illustrating neighboring blocks used in block fusion with the prediction block T1 generated by vertical division of the coding block 400. The predicted signal of prediction block T1 is generated using 1) the motion information of neighboring block A, 2) the motion information of neighboring block B, or 3) motion information detected in block matching. For example, when an encoder selects the motion information of neighboring block A, the encoder first sets the fusion identification information (merge_flag), indicative of the use of motion information of a neighboring block, to "merge_flag = 1" and transmits the fusion identification information (merge_flag) to a decoder. Second, the encoder sets the fusion block selection information (merge_flag_left), indicative of using neighboring block A out of neighboring block A and neighboring block B, to "merge_flag_left = 1" and transmits the fusion block selection information (merge_flag_left) to the decoder. The decoder, receiving both pieces of information, can identify that the predicted signal of the target prediction block is to be generated using the motion information of neighboring block A. Similarly, when the decoder receives "merge_flag = 1" and "merge_flag_left = 0" (selection of neighboring block B), it can identify that the predicted signal of the target prediction block is to be generated using the motion information of neighboring block B; when it receives "merge_flag = 0," it can identify that it should further receive the motion information from the encoder, and it restores the motion information of the target prediction block. The block fusion described herein is described in Non-Patent Literature 1. In inter-frame prediction in standards such as MPEG-1, 2 and MPEG-4, each image is divided into a set of rectangular blocks without overlap between them, and a motion vector is associated with each of the blocks. The motion vector is obtained by motion search for each block and represents a horizontal displacement and a vertical displacement of a current block from a second block used to predict the image signal of the current block. Patent Literature 4 below describes a method of performing motion-compensated prediction with higher accuracy in situations where there is a motion boundary in an oblique direction within a block. This method divides a block into non-rectangular subpartitions and performs motion-compensated prediction for each subpartition. Patent Literature 5 below describes a method of further dividing a block into small rectangular subpartitions and performing motion-compensated prediction for each subpartition. In this method, for encoding a motion vector of a processing target subpartition, a predicted motion vector is generated from a motion vector of a block that is in contact with the processing target subpartition and precedes it in a subpartition processing order, and only a difference between the motion vector of the processing target subpartition and the predicted motion vector, that is, a differential motion vector, is encoded. In this method, if the processing target subpartition has no contact with a block preceding it in processing order, the predicted motion vector of the processing target subpartition is generated from a motion vector of another subpartition preceding it in processing order in the block that includes the processing target subpartition. Citation List Patent Literature Patent Literature 1: US Patent No.
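The encoder-side signaling of this example can be sketched as follows; write_bit and write_motion are hypothetical stand-ins for the entropy coder, and equality of motion information is simplified to a direct comparison.

```python
def encode_prediction_block_t1(detected_motion, motion_a, motion_b,
                               write_bit, write_motion):
    # Signaling for the example of (A) of Figure 2, where neighboring
    # blocks A (left) and B (above) are the two candidates for the motion
    # information of prediction block T1.
    if detected_motion == motion_a:
        write_bit(1)  # merge_flag = 1: block fusion is used
        write_bit(1)  # merge_flag_left = 1: neighboring block A is selected
    elif detected_motion == motion_b:
        write_bit(1)  # merge_flag = 1
        write_bit(0)  # merge_flag_left = 0: neighboring block B is selected
    else:
        write_bit(0)  # merge_flag = 0: no block fusion
        write_motion(detected_motion)  # motion information is sent explicitly
```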
6,765,964 Patent Literature 2: US Patent No. 7,003,035 Patent Literature 3: US Patent No. 6,259,739 Patent Literature 4: Japanese Patent Application Laid-Open No. 2005-277968 Patent Literature 5: Japanese Patent Application Laid-Open No. 2009-246972 Non-Patent Literature Non-Patent Literature 1: Test Model under Consideration, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 1st Meeting: Dresden, DE, April 15-23, 2010, Document: JCTVC-A205 SUMMARY OF INVENTION Technical Problem In Non-Patent Literature 1 above, candidates for motion information to be used in block fusion of a plurality of prediction blocks resulting from division of a target coding block serving as a coding target are selected by the same method, regardless of the prediction blocks and the surrounding situations. For this reason, for example, as shown in (B) of Figure 2, the candidates for motion information in generating the predicted signal of prediction block T2 include the motion information of prediction block T1 in the same coding block. The prediction block division type consisting of prediction block T1 and prediction block T2 is prepared under the assumption that the predicted signals of the two blocks are generated using different pieces of motion information. Therefore, it is unfavorable that the motion information of prediction block T1 is included in the candidates for motion information of prediction block T2. That is, it may result in inefficient coding. Therefore, an object of the present invention, in some aspects, is to provide a predictive image coding device, a predictive image coding method, a predictive image coding program, a predictive image decoding device, a predictive image decoding method, and a predictive image decoding program in which candidates for motion information to be used in generating the predicted signal of the target prediction block are selected based on previously encoded or decoded prediction information (the prediction block division type and motion information) of the target coding block and surrounding coding blocks, to suppress the occurrence of inefficient coding. That is, in these aspects, the object of the present invention is to provide a predictive image coding device, a predictive image coding method, a predictive image coding program, a predictive image decoding device, a predictive image decoding method, and a predictive image decoding program capable of achieving an improvement in coding efficiency. Additionally, there are methods of performing motion-compensated prediction for each of the subpartitions obtained by dividing a processing target block, as described in Patent Literature 4 or Patent Literature 5. In this motion-compensated prediction, it is preferable, in terms of code amount, to generate the predicted motion vector for each subpartition based on a motion vector of a block preceding a processing target subpartition in processing order, and to encode only the differential motion vector between the motion vector of the subpartition and the predicted motion vector. Figure 23 is a drawing that serves to explain motion-compensated prediction. As shown in Figure 23, a processing target block P may have a subpartition SP1 in contact with at least one block CP preceding block P in a processing order, and a subpartition SP2 having no contact with block CP.
A motion vector V2 of such a subpartition SP2 is encoded as it is, without the use of a predicted motion vector, in the method described in Patent Literature 4. This method is equivalent to a method of setting the predicted motion vector to a zero vector. On the other hand, in the method described in Patent Literature 5, a predicted motion vector of subpartition SP2 is generated from motion vector V1 of subpartition SP1, which is another subpartition in block P and precedes subpartition SP2 in processing order. However, the motion vector of subpartition SP1 and the motion vector of subpartition SP2 are originally considered to be different from each other. Therefore, the method described in Patent Literature 5 may fail to efficiently encode the motion vector of subpartition SP2. Therefore, it is an object of the present invention, in some other aspects, to provide a video coding device, a video coding method, and a video coding program capable of achieving an improvement in coding efficiency, as well as a video decoding device, a video decoding method, and a video decoding program corresponding to such video coding. Solution to the Problem The first aspect of the present invention relates to predictive image coding. A predictive image coding device according to the first aspect of the present invention comprises: a region division means that divides an input image into a plurality of regions; a prediction information estimation means that subdivides a target region serving as a coding target, resulting from division by the region division means, into a plurality of prediction regions, that determines a prediction block division type indicating a number and region shapes of the prediction regions suitable for the target region, that estimates motion information for acquisition of each of the signals highly correlated with the respective prediction regions from a previously reconstructed signal, and that obtains prediction information containing the prediction block division type and the motion information; a prediction information encoding means that encodes the prediction information associated with the target region; a predicted signal generation means that generates a predicted signal of the target region based on the prediction information associated with the target region; a residual signal generation means that generates a residual signal based on the predicted signal of the target region and a pixel signal of the target region; a residual signal encoding means that encodes the residual signal generated by the residual signal generation means; a residual signal restoration means that decodes the encoded data of the residual signal to generate a reconstructed residual signal; and a recording means that adds the predicted signal to the reconstructed residual signal to generate a restored pixel signal of the target region, and that stores the restored pixel signal as the previously reconstructed signal.
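The differential motion vector coding discussed above reduces, in essence, to the following; the helper names are ours and the sketch omits entropy coding.

```python
def encode_mv(mv, pmv):
    # Only the differential motion vector is encoded; the closer the
    # predicted motion vector is to the true motion, the smaller the code.
    return (mv[0] - pmv[0], mv[1] - pmv[1])

def decode_mv(dmv, pmv):
    # The decoder regenerates the same predicted motion vector and adds
    # the transmitted difference back.
    return (dmv[0] + pmv[0], dmv[1] + pmv[1])

# Patent Literature 4 in effect uses pmv = (0, 0) for a subpartition such
# as SP2; Patent Literature 5 uses the motion vector V1 of SP1, which may
# differ from V2, so neither choice is always efficient.
```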
The prediction information encoding means is configured as follows: the prediction information encoding means has a prediction information storage means that stores previously encoded prediction information; the prediction information encoding means encodes the prediction block division type of the target region and stores the prediction block division type in the prediction information storage means; based on the prediction information of a neighboring region adjacent to the target region, the number of previously encoded prediction regions in the target region, and the previously encoded prediction information of the target region, the prediction information encoding means selects a candidate for motion information to be used in generating a predicted signal of a target prediction region serving as a next prediction region, from the previously encoded motion information of a region adjacent to the target prediction region; according to the number of selected candidates for motion information, the prediction information encoding means encodes both fusion block information, indicative of generation of the predicted signal of the target prediction region using the selected candidate for motion information, and the motion information detected by the prediction information estimation means, or encodes either the fusion block information or the motion information, and stores the motion information to be used in generating the predicted signal of the target prediction region in the prediction information storage means. A predictive image coding method according to the first aspect of the present invention comprises: a region division step of dividing an input image into a plurality of regions; a prediction information estimation step of subdividing a target region serving as a coding target, resulting from division in the region division step, into a plurality of prediction regions, determining a prediction block division type indicating a number and region shapes of the prediction regions suitable for the target region, estimating motion information for acquisition of each of the signals highly correlated with the respective prediction regions from a previously reconstructed signal, and obtaining prediction information containing the prediction block division type and the motion information; a prediction information encoding step of encoding the prediction information associated with the target region; a predicted signal generation step of generating a predicted signal of the target region based on the prediction information associated with the target region; a residual signal generation step of generating a residual signal based on the predicted signal of the target region and a pixel signal of the target region; a residual signal encoding step of encoding the residual signal generated in the residual signal generation step; a residual signal restoration step of decoding encoded data of the residual signal to generate a reconstructed residual signal; and a recording step of adding the predicted signal to the reconstructed residual signal to generate a restored pixel signal of the target region, and storing the restored pixel signal as the previously reconstructed signal.
The prediction information encoding step is configured as follows: the prediction information encoding step comprises encoding the prediction block division type of the target region and storing the prediction block division type in the prediction information storage means that stores previously encoded prediction information; the prediction information encoding step comprises, based on the prediction information of a neighboring region adjacent to the target region, the number of previously encoded prediction regions in the target region, and the previously encoded prediction information of the target region, selecting a candidate for motion information to be used in generating a predicted signal of a target prediction region serving as a next prediction region, from the previously encoded motion information of a region adjacent to the target prediction region; and the prediction information encoding step comprises, according to the number of selected candidates for motion information, encoding both fusion block information, indicative of generation of the predicted signal of the target prediction region using the selected candidate for motion information, and the motion information detected in the prediction information estimation step, or encoding either the fusion block information or the motion information, and storing the motion information to be used in generating the predicted signal of the target prediction region in the prediction information storage means. A predictive image coding program according to the first aspect of the present invention causes a computer to function as each means of the predictive image coding device described above. According to the first aspect of the present invention, the candidate for motion information to be used in generating the predicted signal of the target prediction block is selected based on the previously encoded prediction information (motion information and prediction block division type) of the target coding block and adjacent coding blocks, which suppresses the occurrence of inefficient coding. In one embodiment, based on the number of previously encoded prediction regions in the target region, the prediction block division type of the target region, and the prediction block division type of a neighboring region adjacent to the target region, the candidate for motion information of the target prediction region serving as the next prediction region can be selected from the previously encoded motion information of the region adjacent to the target prediction region. In one embodiment, based on the number of previously encoded prediction regions in the target region and the prediction block division type of the target region, the candidate for motion information of the target prediction region serving as the next prediction region can be selected from the previously encoded motion information of the region adjacent to the target prediction region; when the target region is divided into two prediction regions and when the target prediction region is a prediction region to be encoded second in the target region, the motion information of a region that is adjacent to the target prediction region and is not included in the target region can be selected as the candidate for motion information to be used in generating the predicted signal of the target prediction region.
In one embodiment, based on the number of previously encoded prediction regions in the target region, the prediction block division type of the target region, the previously encoded motion information in the target region, and the motion information of the neighboring region adjacent to the target region, the candidate for motion information to be used in generating the predicted signal of the target prediction region serving as the next prediction region may be selected from the previously encoded motion information of the region adjacent to the target prediction region; when the target region is divided into two prediction regions, when the target prediction region is a prediction region to be encoded second in the target region, and when the motion information of the prediction region encoded first in the target region is equal to the motion information of a region that is adjacent to the target prediction region and is not included in the target region, it can be determined that the motion information of the region adjacent to the target prediction region is not used in generating the predicted signal of the target prediction region, and the motion information can be encoded. The second aspect of the present invention relates to predictive image decoding. A predictive image decoding device according to the second aspect of the present invention comprises: a data analysis means that extracts, from compressed data of an image that was divided into a plurality of regions and encoded, the encoded data of prediction information indicating a prediction method to be used in predicting a signal of a target region serving as a decoding target, the encoded data of a predicted signal of the target region, and the encoded data of a residual signal; a prediction information decoding means that decodes the encoded data of the prediction information to restore a prediction block division type indicating a number and region shapes of prediction regions that are subdivided regions of the target region, and motion information for acquisition of each of the predicted signals of the respective prediction regions from a previously reconstructed signal; a predicted signal generation means that generates the predicted signal of the target region based on the prediction information associated with the target region; a residual signal restoration means that restores a reconstructed residual signal of the target region from the encoded data of the residual signal; and a recording means that adds the predicted signal to the reconstructed residual signal to restore a pixel signal of the target region, and that stores the pixel signal as the previously reconstructed signal.
The prediction information decoding means is configured as follows: the prediction information decoding means has a prediction information storage means that stores previously decoded prediction information; the prediction information decoding means decodes the prediction block division type of the target region and stores the prediction block division type in the prediction information storage means; based on the prediction information of a neighboring region adjacent to the target region, the number of previously decoded prediction regions in the target region, and the previously decoded prediction information of the target region, the prediction information decoding means selects a candidate for motion information to be used in generating a predicted signal of a target prediction region serving as a next prediction region, from the previously decoded motion information of a region adjacent to the target prediction region; according to the number of selected candidates for motion information, the prediction information decoding means decodes both fusion block information, indicative of generation of the predicted signal of the target prediction region using the selected candidate for motion information, and the motion information, or decodes either the fusion block information or the motion information, and stores the motion information to be used in generating the predicted signal of the target prediction region in the prediction information storage means. A predictive image decoding method according to the second aspect of the present invention comprises: a data analysis step of extracting, from compressed data of an image that was divided into a plurality of regions and encoded, the encoded data of prediction information indicating a prediction method to be used in predicting a signal of a target region serving as a decoding target, the encoded data of a predicted signal of the target region, and the encoded data of a residual signal; a prediction information decoding step of decoding the encoded data of the prediction information to restore a prediction block division type indicating a number and region shapes of prediction regions that are subdivided regions of the target region, and motion information for acquisition of each of the predicted signals of the respective prediction regions from a previously reconstructed signal; a predicted signal generation step of generating the predicted signal of the target region based on the prediction information associated with the target region; a residual signal restoration step of restoring a reconstructed residual signal of the target region from the encoded data of the residual signal; and a recording step of adding the predicted signal to the reconstructed residual signal to restore a pixel signal of the target region, and storing the pixel signal as the previously reconstructed signal. The prediction information decoding step is configured as follows: the prediction information decoding step comprises decoding the prediction block division type of the target region and storing the prediction block division type as
previously decoded prediction information in the prediction information storage means that stores previously decoded prediction information; the prediction information decoding step comprises, based on the prediction information of a neighboring region adjacent to the target region, the number of previously decoded prediction regions in the target region, and the previously decoded prediction information of the target region, selecting a candidate for motion information to be used in generating a predicted signal of a target prediction region serving as the next prediction region, from the previously decoded motion information of a region adjacent to the target prediction region; and the prediction information decoding step comprises, according to the number of selected candidates for motion information, decoding both fusion block information, indicative of generation of the predicted signal of the target prediction region using the selected candidate for motion information, and the motion information, or decoding either the fusion block information or the motion information, and storing the motion information to be used in generating the predicted signal of the target prediction region in the prediction information storage means. A predictive image decoding program according to the second aspect of the present invention causes a computer to function as each means of the predictive image decoding device described above. According to the second aspect of the present invention, an image can be decoded from the compressed data generated by the above-described predictive image coding. In one embodiment, based on the number of previously decoded prediction regions in the target region, the prediction block division type of the target region, and the prediction block division type of a neighboring region adjacent to the target region, the candidate for motion information of the target prediction region serving as the next prediction region can be selected from the previously decoded motion information of the region adjacent to the target prediction region. In one embodiment, based on the number of previously decoded prediction regions in the target region and the prediction block division type of the target region, the candidate for motion information to be used in generating the predicted signal of the target prediction region serving as the next prediction region can be selected from the previously decoded motion information of the region adjacent to the target prediction region; when the target region is divided into two prediction regions and when the target prediction region is a prediction region to be decoded second in the target region, the motion information of a region that is adjacent to the target prediction region and is not included in the target region can be selected as the candidate for motion information of the target prediction region. In one embodiment, based on the number of previously decoded prediction regions in the target region, the prediction block division type of the target region, the previously decoded motion information in the target region, and the motion information of the neighboring region adjacent to the target region, the candidate for motion information to be used in generating the predicted signal of the target prediction region serving as the next prediction region can be selected from the previously decoded motion information of the
region adjacent to the target prediction region; when the target region is divided into two prediction regions, when the target prediction region is a prediction region to be decoded second in the target region, and when the motion information of the prediction region decoded first in the target region is equal to the motion information of a region that is adjacent to the target prediction region and is not included in the target region, it can be determined that the motion information of the region adjacent to the target prediction region is not used in generating the predicted signal of the target prediction region, and the motion information can be decoded. The third aspect of the present invention relates to video coding. A video coding device according to the third aspect comprises a division means, a subpartition generation means, a motion detection means, a predicted signal generation means, a motion prediction means, a differential motion vector generation means, a residual signal generation means, an addition means, a storage means, and an encoding means. The division means divides an input image of a video sequence into a plurality of partitions. The subpartition generation means partitions a processing target partition generated by the division means into a plurality of subpartitions and generates shape information specifying the shapes of the subpartitions. The motion detection means detects a motion vector of the processing target partition. The predicted signal generation means generates a predicted signal of the processing target partition from a previously reconstructed image signal, using the motion vector detected by the motion detection means. The motion prediction means generates a predicted motion vector of the processing target partition based on the shape information generated by the subpartition generation means and a motion vector of a previously processed partial region. The previously processed partial region may be a partition or a subpartition preceding the processing target partition in a processing order. The differential motion vector generation means generates a differential motion vector based on a difference between the motion vector used in generating the predicted signal of the processing target partition and the predicted motion vector. The residual signal generation means generates a residual signal based on a difference between the predicted signal and a pixel signal of the processing target partition. The addition means adds the residual signal to the predicted signal to generate a reconstructed image signal. The storage means stores the reconstructed image signal as a previously reconstructed image signal. The encoding means encodes the residual signal generated by the residual signal generation means, the differential motion vector generated by the differential motion vector generation means, and the shape information generated by the subpartition generation means, to generate compressed data. When a processing target subpartition in the processing target partition has no contact with a partition preceding the processing target subpartition in processing order, the motion prediction means generates a predicted motion vector of the processing target subpartition based on a motion vector of a previously processed partial region belonging to either a domain containing the processing target subpartition or the other domain.
The domain and the other domain are separated by an extension line of a boundary between the processing target subpartition and another subpartition in the processing target partition. A video coding method according to the third aspect comprises: (a) a division step of dividing an input image of a video sequence into a plurality of partitions; (b) a subpartition generation step of partitioning a processing target partition generated in the division step into a plurality of subpartitions and generating shape information specifying the shapes of the subpartitions; (c) a motion detection step of detecting a motion vector of the processing target partition; (d) a predicted signal generation step of generating a predicted signal of the processing target partition from a previously reconstructed image signal, using the motion vector detected in the motion detection step; (e) a motion prediction step of generating a predicted motion vector of the processing target partition based on the shape information generated in the subpartition generation step and a motion vector of a previously processed partial region, which is a partition or a subpartition preceding the processing target partition in a processing order; (f) a differential motion vector generation step of generating a differential motion vector based on a difference between the motion vector used in generating the predicted signal of the processing target partition and the predicted motion vector; (g) a residual signal generation step of generating a residual signal based on a difference between the predicted signal and a pixel signal of the processing target partition; (h) an addition step of adding the residual signal to the predicted signal to generate a reconstructed image signal; (i) a storage step of storing the reconstructed image signal as a previously reconstructed image signal; and (j) an encoding step of encoding the residual signal generated in the residual signal generation step, the differential motion vector generated in the differential motion vector generation step, and the shape information generated in the subpartition generation step, to generate compressed data. When a processing target subpartition in the processing target partition has no contact with a partition preceding the processing target subpartition in processing order, the motion prediction step comprises generating a predicted motion vector of the processing target subpartition based on a motion vector of a previously processed partial region belonging to either a domain containing the processing target subpartition or the other domain. The domain and the other domain are separated by an extension line of a boundary between the processing target subpartition and another subpartition in the processing target partition. A video coding program according to the third aspect causes a computer to function as each means of the video coding device described above. Out of the two domains defined by the aforementioned boundary extension line, the domain that contains a subpartition having no contact with a partition preceding it in processing order is very likely to have motion similar to the motion of that subpartition. Therefore, according to the third aspect, the accuracy of the predicted motion vector improves, the value of the differential motion vector becomes smaller, and the motion vector is encoded with a smaller code amount. Therefore, coding efficiency is improved. The fourth aspect of the present invention relates to video decoding.
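The geometric test implied by the boundary extension line can be sketched as follows; this only illustrates a side-of-line test under our own assumptions about how candidate regions are represented, and the actual candidate selection rule is defined by the embodiments described later.

```python
def side_of_boundary_extension(point, line_point, line_dir):
    # Sign of the 2-D cross product tells in which of the two domains,
    # separated by the extension line of the subpartition boundary,
    # the given point (e.g., the center of a partial region) lies.
    px, py = point[0] - line_point[0], point[1] - line_point[1]
    return line_dir[0] * py - line_dir[1] * px

def predicted_mv_for_subpartition(sp_center, neighbors, line_point, line_dir):
    # neighbors: (center, motion_vector) pairs of previously processed
    # partial regions. Prefer a region lying in the same domain as the
    # processing target subpartition; fall back to the zero vector.
    target_side = side_of_boundary_extension(sp_center, line_point, line_dir)
    for center, mv in neighbors:
        if side_of_boundary_extension(center, line_point, line_dir) * target_side > 0:
            return mv
    return (0, 0)
```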
A video decoding device according to the fourth aspect comprises a decoding means, a motion prediction means, a vector addition means, a predicted signal generation means, an addition means, and a storage means. The decoding means decodes compressed data to generate a reconstructed residual signal of a processing target partition in an image, a differential motion vector of the processing target partition, and shape information specifying the shapes of a plurality of subpartitions in the processing target partition. The motion prediction means generates a predicted motion vector of the processing target partition based on the shape information and a motion vector of a previously processed partial region, which is a partition or a subpartition preceding the processing target partition in a processing order. The vector addition means adds the predicted motion vector generated by the motion prediction means to the differential motion vector generated by the decoding means to generate a motion vector of the processing target partition. The predicted signal generation means generates a predicted signal of the processing target partition from a previously reconstructed image signal, based on the motion vector of the processing target partition. The addition means adds the predicted signal to the reconstructed residual signal generated by the decoding means to generate a reconstructed image signal. The storage means stores the reconstructed image signal as a previously reconstructed image signal. When a processing target subpartition in the processing target partition has no contact with a partition preceding the processing target subpartition in processing order, the motion prediction means generates a predicted motion vector of the processing target subpartition based on a motion vector of a previously processed partial region belonging to either a domain containing the processing target subpartition or the other domain. The domain and the other domain are separated by an extension line of a boundary between the processing target subpartition and another subpartition in the processing target partition.
A video decoding method according to the fourth aspect is a method of decoding compressed data to generate a video sequence, comprising: (a) a decoding step of decoding the compressed data to generate a reconstructed residual signal of a processing target partition in an image, a differential motion vector of the processing target partition, and shape information specifying the shapes of a plurality of subpartitions in the processing target partition; (b) a motion prediction step of generating a predicted motion vector of the processing target partition based on the shape information and a motion vector of a previously processed partial region, which is a partition or a subpartition preceding the processing target partition in a processing order; (c) a vector addition step of adding the predicted motion vector generated in the motion prediction step to the differential motion vector generated in the decoding step to generate a motion vector of the processing target partition; (d) a predicted signal generation step of generating a predicted signal of the processing target partition from a previously reconstructed image signal, based on the motion vector of the processing target partition; (e) an addition step of adding the predicted signal to the reconstructed residual signal generated in the decoding step to generate a reconstructed image signal; and (f) a storage step of storing the reconstructed image signal as a previously reconstructed image signal. When a processing target subpartition in the processing target partition has no contact with a partition preceding the processing target subpartition in processing order, the motion prediction step comprises generating a predicted motion vector of the processing target subpartition based on a motion vector of a previously processed partial region belonging to either a domain containing the processing target subpartition or the other domain. The domain and the other domain are separated by an extension line of a boundary between the processing target subpartition and another subpartition in the processing target partition. A video decoding program according to the fourth aspect causes a computer to function as each means of the video decoding device described above. According to the fourth aspect, the predicted motion vector of the subpartition having no contact with a partition preceding it in processing order is generated from a motion vector previously decoded in the domain containing that subpartition. This predicted motion vector is likely to be similar to the motion vector of the subpartition. According to the above embodiments, therefore, the accuracy of the predicted motion vector is improved, the value of the differential motion vector becomes smaller, and it becomes possible to perform decoding from compressed data with a smaller bit amount. Therefore, efficient decoding is achieved. Advantageous Effects of the Invention The predictive image coding device, the predictive image coding method, the predictive image coding program, the predictive image decoding device, the predictive image decoding method, and the predictive image decoding program according to some aspects of the present invention provide the effect of more efficient coding of motion information, because the candidate for motion information to be used in generating the predicted signal of the target prediction block can be selected based on previously encoded or decoded surrounding information.
Some other aspects of the present invention provide the video coding device, the video coding method, and the video coding program capable of enhancing coding efficiency. Additionally, there are provided the video decoding device, the video decoding method, and the video decoding program corresponding to the foregoing video coding. BRIEF DESCRIPTION OF THE DRAWINGS Figure 1 is a block diagram showing a predictive image coding device according to one embodiment. Figure 2 is a schematic diagram that serves to explain candidates for motion information in conventional block fusion. Figure 3 is a schematic diagram that serves to explain candidates for motion information in block fusion according to one embodiment. Figure 4 is a flowchart illustrating a processing procedure of a prediction information encoder shown in Figure 1. Figure 5 is a flowchart showing a procedure of a predictive image coding method of the predictive image coding device shown in Figure 1. Figure 6 is a block diagram showing a predictive image decoding device according to one embodiment. Figure 7 is a flowchart illustrating a processing procedure of a prediction information decoder shown in Figure 6. Figure 8 is a flowchart showing a procedure of a predictive image decoding method of the predictive image decoding device shown in Figure 6. Figure 9 is a first schematic drawing that serves to explain processes using motion information of a plurality of neighboring blocks adjacent to a target prediction block as motion information of the target prediction block. Figure 10 is a second schematic drawing that serves to explain processes using motion information of a plurality of neighboring blocks adjacent to a target prediction block as motion information of the target prediction block. Figure 11 is a flowchart illustrating a process using motion information of a plurality of neighboring blocks adjacent to a target prediction block as motion information of the target prediction block. Figure 12 is a third schematic drawing that serves to explain processes using motion information of a plurality of neighboring blocks adjacent to a target prediction block as motion information of the target prediction block. Figure 13 is a second example of a flowchart illustrating a process using motion information of a plurality of neighboring blocks adjacent to a target prediction block as motion information of the target prediction block. Figure 14 is a second example of a schematic diagram that serves to explain candidates for motion information in block fusion according to one embodiment. Figure 15 is a third example of a schematic diagram that serves to explain candidates for motion information in block fusion according to one embodiment. Figure 16 is a block diagram showing a program capable of executing the predictive image coding method according to one embodiment. Figure 17 is a block diagram showing a program capable of executing the predictive image decoding method according to one embodiment. Figure 18 is a drawing showing a hardware configuration of a computer for executing a program recorded on a recording medium. Figure 19 is a perspective view of a computer for executing a program stored on a recording medium. Figure 20 is a schematic diagram that serves to explain prediction block division types of coding blocks. Figure 21 is a schematic diagram of a motion estimation process (A) and a template matching process (B) in inter-frame prediction.
Figure 22 is a schematic diagram that serves to explain the conventional intra-frame prediction method. Figure 23 is a drawing that serves to explain motion-compensated prediction. Figure 24 is a drawing schematically showing a configuration of a video coding device according to one embodiment. Figure 25 is a drawing that serves to explain the generation of subpartitions. Figure 26 is a drawing showing a configuration of a motion predictor in one embodiment. Figure 27 is a flowchart of a video coding method according to one embodiment. Figure 28 is a flowchart showing a process of a motion predictor according to one embodiment. Figure 29 is a drawing showing an example of subpartitions of a target partition and surrounding partial regions. Figure 30 is a drawing showing another example of subpartitions of a target block and surrounding partial regions. Figure 31 is a drawing showing still further examples of subpartitions of a target block and surrounding partial regions. Figure 32 is a drawing showing yet another example of subpartitions of a target block and a surrounding partial region. Figure 33 is a drawing showing yet another example of subpartitions of a target block and surrounding partial regions. Figure 34 is a drawing schematically showing a configuration of a video decoding device according to one embodiment. Figure 35 is a drawing showing a configuration of a motion predictor according to one embodiment. Figure 36 is a flowchart of a video decoding method according to one embodiment. Figure 37 is a flowchart showing a process of a motion predictor according to one embodiment. Figure 38 is a drawing showing a configuration of a video coding program according to one embodiment. Figure 39 is a drawing showing a configuration of a video decoding program according to one embodiment. DESCRIPTION OF EMBODIMENTS A variety of embodiments are described below in detail with reference to the accompanying drawings. In the description, identical or equivalent elements in the drawings will be denoted by the same reference signs, without redundant description. Figure 1 is a block diagram showing a predictive image coding device 100 according to one embodiment. This predictive image coding device 100 is provided with an input terminal 101, a block divider 102, a predicted signal generator 103, a frame memory 104, a subtractor 105, a transformer 106, a quantizer 107, an inverse quantizer 108, an inverse transformer 109, an adder 110, a quantized transform coefficient encoder 111, an output terminal 112, a prediction block division type selector 113, a motion information estimator 114, a prediction information memory 115, and a prediction information encoder 116. The transformer 106, the quantizer 107, and the quantized transform coefficient encoder 111 function as a residual signal encoding means, and the inverse quantizer 108 and the inverse transformer 109 function as a residual signal restoration means. The prediction block division type selector 113 and the motion information estimator 114 function as a prediction information estimation means, and the prediction information memory 115 and the prediction information encoder 116 function as a prediction information encoding means. The input terminal 101 is a terminal that accepts input of a signal of a video sequence consisting of a plurality of images. The block divider 102 divides an image serving as a coding target, represented by a signal input from the input terminal 101, into a plurality of regions (coding blocks).
In the present embodiment, the coding target image is divided into blocks each consisting of 16x16 pixels; however, the image may be divided into blocks of any other size or shape. Additionally, blocks of different sizes may be mixed in one frame. The prediction block division type selector 113 divides a target region (target coding block) serving as a coding target into prediction regions to be subjected to a prediction process. For example, it selects one of (A) to (H) in Figure 20 for each coding block and subdivides the coding block according to the selected mode. Each divided region is referred to as a prediction region (prediction block), and each of the division methods (A) to (H) in Figure 20 is referred to as a prediction block division type. An available method of selecting a prediction block division type is, for example, a method of actually performing, for each of the subdivisions of the signal of the target coding block fed through line L102, the prediction processing and the coding processing described below, and selecting a division type that minimizes a rate-distortion value calculated from the energy of a coding error signal between the original signal of the coding block and a reconstructed signal, and the amount of code required to encode the coding block; however, the selection method is not limited to this. The prediction block division type of the target coding block is output via line L113a, line L113b, and line L113c to the prediction information memory 115, the motion information estimator 114, and the predicted signal generator 103, respectively. The motion information estimator 114 detects motion information necessary for generating a predicted signal of each prediction block in the target coding block. Applicable methods of predicted signal generation (prediction methods) include the inter-frame prediction and the intra-frame prediction described in the background art (the intra-frame prediction is not shown), but are not limited to these. In the present description, motion information is detected by the block matching shown in Figure 21. An original signal of the target prediction block being a prediction target can be generated from the original signal of the coding block fed through line L102a and the prediction block division type of the target coding block fed through line L113b. A predicted signal that minimizes the sum of absolute errors with respect to the original signal of the target prediction block is detected from the image signals acquired via line L104. In this case, the motion information contains a motion vector, an inter-frame prediction mode (progressive/regressive/bidirectional prediction), a reference frame number, and so on. The detected motion information is output through line L114 to the prediction information memory 115 and the prediction information encoder 116. The prediction information memory 115 stores the input motion information and the prediction block division type. The prediction information encoder 116 selects candidates for motion information to be used in block fusion of each prediction block, entropy-encodes the prediction information of the target coding block, and outputs the encoded data through line L116 to the output terminal 112. An applicable method of entropy coding includes, but is not limited to, arithmetic coding, variable-length coding, and the like.
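The rate-distortion selection described above can be sketched as follows; trial_encode is a hypothetical routine returning the distortion and code amount of one trial encoding, and lam is the Lagrange multiplier.

```python
def select_division_type(coding_block, division_types, trial_encode, lam):
    # Trial-encode the coding block under each candidate prediction block
    # division type and keep the type minimizing J = D + lambda * R, where
    # D is the energy of the coding error signal and R is the amount of
    # code required to encode the coding block.
    best_type, best_cost = None, float("inf")
    for division in division_types:
        d, r = trial_encode(coding_block, division)
        cost = d + lam * r
        if cost < best_cost:
            best_type, best_cost = division, cost
    return best_type
```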
Prediction information encoder 116 selects motion information candidates to be used in block fusion of each prediction block, entropy-encodes the prediction information of the target coding block, and outputs the encoded data through line L116 to the output terminal 112. An applicable method of entropy coding includes, but is not limited to, arithmetic coding, variable length coding, and the like. The prediction information contains block fusion information for performing block fusion using the motion information of blocks neighboring the prediction block, in addition to the prediction block division type of the target coding block and the motion information of the prediction blocks. The processing of the prediction information encoder 116 will be described below.

Predicted signal generator 103 acquires previously reconstructed signals from frame memory 104, based on the motion information of each prediction block in the target coding block fed through line L114 and the prediction block division type fed through line L113c, and generates a predicted signal of each prediction block in the target coding block. The predicted signal generated in the predicted signal generator 103 is output through line L103 to the subtractor 105 and the adder 110.

Subtractor 105 subtracts the predicted signal of the target coding block fed through line L103 from the pixel signal of the target coding block fed through line L102b after the division by block divider 102, to generate a residual signal. Subtractor 105 outputs the residual signal obtained by the subtraction through line L105 to transformer 106.

Transformer 106 is a part that performs a discrete cosine transform on the input residual signal. Quantizer 107 is a part that quantizes the transform coefficients obtained by the discrete cosine transform by transformer 106. Quantized transform coefficient encoder 111 entropy-encodes the quantized transform coefficients obtained by quantizer 107. The encoded data is output through line L111 to output terminal 112. An applicable method of entropy coding includes, but is not limited to, arithmetic coding, variable length coding, and so on. Output terminal 112 outputs the pieces of information from the prediction information encoder 116 and the quantized transform coefficient encoder 111 together to the outside.

Inverse quantizer 108 performs an inverse quantization of the quantized transform coefficients. Inverse transformer 109 performs an inverse discrete cosine transform to restore a residual signal. Adder 110 adds the restored residual signal to the predicted signal fed through line L103 to reconstruct a signal of the target coding block, and stores the reconstructed signal in frame memory 104. The present embodiment employs transformer 106 and inverse transformer 109, but another transform process may be employed instead of these transformers. Additionally, transformer 106 and inverse transformer 109 are not always essential. In this manner, for use in generating the predicted signal of a subsequent target coding block, the reconstructed signal of the encoded target coding block is restored by the inverse process and stored in frame memory 104.

Next, the processing of the prediction information encoder 116 will be described. The prediction information encoder 116 first selects motion information candidates to be used in block fusion of each prediction block (motion information candidates to be used in generating a predicted signal of a target prediction region) from the motion information of blocks neighboring the target prediction block. Block fusion refers to the generation of the predicted signal of the target prediction block using the motion information of a neighboring block.
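In code terms, block fusion amounts to reusing a neighboring block's motion information for motion compensation instead of coding a new vector. The sketch below assumes a single reference frame and a dictionary representation of motion information; both are illustrative assumptions.

    def motion_compensate(ref_frame, top, left, h, w, mv):
        # Copy the h x w region of the reference frame displaced by mv = (dy, dx).
        # Assumes the displaced region lies entirely inside the reference frame.
        dy, dx = mv
        return [row[left + dx:left + dx + w]
                for row in ref_frame[top + dy:top + dy + h]]

    def fuse_block(ref_frame, top, left, h, w, neighbor_info):
        """Block fusion: the predicted signal of the target prediction block is
        generated from the motion information of a neighboring block, so no new
        motion vector needs to be coded for this block."""
        return motion_compensate(ref_frame, top, left, h, w, neighbor_info["mv"])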
Next, the prediction information encoder 116 compares the motion information detected by the motion information estimator 114 with the motion information candidates thus selected, to determine whether block fusion should be performed. Then, according to the number of motion information candidates to be used in block fusion and the applicability of block fusion, the prediction information encoder 116 entropy-encodes either the block fusion information or the motion information, along with the prediction block division type. The block fusion information includes fusion identification information (merge_flag) to indicate whether the predicted signal of the target prediction block is to be generated using the motion information of a neighboring block, i.e., whether block fusion is to be performed, and fusion block selection information (merge_flag_left) to indicate which of the pieces of motion information of two or more blocks neighboring the target prediction block is to be used to generate the predicted signal of the target prediction block.

If there is no motion information candidate to be used in block fusion of a given prediction block, there is no need to encode these two pieces of information, namely the fusion identification information and the fusion block selection information. If there is one motion information candidate, the fusion identification information is encoded; if two or more motion information candidates exist and block fusion is performed, the two pieces of information, namely the fusion identification information and the fusion block selection information, are encoded. Even with two or more motion information candidates, there is no need to encode the fusion block selection information if block fusion is not performed.

Figure 3 is a schematic diagram that serves to explain a process of selecting candidates for motion information to be used in block fusion of a prediction block according to one embodiment. Figure 3 shows an example of a prediction block division type that vertically bisects the coding block (divides the coding block into right and left blocks), as in the case of block 301 shown in (B) of Figure 20. Block 301 is described below as an example, but the same description also applies to blocks 302, 304, 305, 306, and 307.

A candidate for motion information is selected based on the pieces of information below.
1) The number of already encoded/already decoded prediction blocks in the target coding block
2) The prediction block division type of the target coding block
3) The prediction block division type of a block neighboring the target prediction block
4) The motion information of the already encoded/already decoded prediction blocks in the target coding block
5) The motion information and prediction mode (intra-frame prediction/inter-frame prediction) of the blocks neighboring the target prediction block

In the example of Figure 3, a motion information candidate to be used in block fusion is selected using the pieces of information 1), 2), 4), and 5). First, it is found from the information of 2) that the target coding block 400 is vertically bisected into two prediction blocks T1 and T2, and it is found from the information of 1) whether the next prediction block is prediction block T1 or prediction block T2.
When the next prediction block is prediction block T1 (or when the number of already encoded/already decoded prediction blocks in the target coding block is 0), the pieces of motion information of neighboring block A and neighboring block B are set as candidates for motion information for block fusion (the arrows in the drawing indicate that the pieces of motion information of neighboring blocks A and B are motion information candidates for use in generating the predicted signal of prediction block T1). On this occasion, if neighboring block A or B is an intra-frame predicted block or a block outside the picture, the motion information of that block may be excluded from the candidates for motion information for block fusion (it is also possible to set the motion information to a pseudo default value; for example, the motion vector is set to 0 and the reference frame number to 0). If the pieces of motion information of the two neighboring blocks A and B are identical to each other, the motion information of one neighboring block may be excluded from the candidates.

When the next prediction block is prediction block T2 (or when the number of already encoded/already decoded prediction blocks in the target coding block is 1), as shown in (A) of Figure 3, the motion information of the neighboring block T1 is excluded from the candidates for motion information for block fusion. This is because the target coding block was originally divided into two blocks on the assumption that the predicted signals of prediction block T1 and prediction block T2 are generated from different pieces of motion information. That is, this serves to avoid the situation in which the motion information of prediction block T2 becomes equal to the motion information of prediction block T1. Since this process leaves only one piece of motion information for block fusion of prediction block T2, the cost of encoding the fusion block selection information can be reduced (an arrow in the drawing indicates that the motion information of neighboring block D is applicable to generation of the predicted signal of prediction block T2).

Additionally, based on the pieces of information 4) and 5) above, the motion information of prediction block T1 is compared with the motion information of neighboring block D, and if these pieces of motion information of prediction block T1 and neighboring block D are identical to each other, the motion information of neighboring block D is also excluded from the candidates for motion information for block fusion, as shown in (B) of Figure 3. The reason for this is that if the predicted signal of prediction block T2 were generated using the motion information of neighboring block D, the pieces of motion information of prediction blocks T1 and T2 would become identical to each other. Owing to this process, the number of candidates for motion information for block fusion of prediction block T2 becomes zero, which can reduce the cost of encoding the fusion identification information and the fusion block selection information.
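The exclusion rules just described for Figure 3 can be condensed into a small selection routine. The sketch below covers only the vertical-bisection case and assumes motion information is carried as dictionaries with "mode", "mv", and "ref" keys; these representational choices are assumptions made for illustration.

    def select_merge_candidates(block_index, neighbor_a, neighbor_b, neighbor_d, info_t1):
        """Select candidates for motion information for block fusion in a
        vertically bisected coding block (Figure 3). block_index 0 is prediction
        block T1 and 1 is prediction block T2. Neighbors are dicts such as
        {"mode": "inter", "mv": (dy, dx), "ref": 0}, or None when unavailable;
        info_t1 is the (mv, ref) pair already determined for T1."""
        if block_index == 0:
            candidates = []
            for nb in (neighbor_a, neighbor_b):
                if nb is None or nb["mode"] == "intra":
                    continue  # intra-frame or out-of-picture neighbors are excluded
                if any((c["mv"], c["ref"]) == (nb["mv"], nb["ref"]) for c in candidates):
                    continue  # identical pieces of motion information count once
                candidates.append(nb)
            return candidates
        # T2: the motion information of T1 is always excluded; neighbor D is
        # excluded when unavailable, intra-coded, or identical to that of T1.
        if (neighbor_d is None or neighbor_d["mode"] == "intra"
                or (neighbor_d["mv"], neighbor_d["ref"]) == info_t1):
            return []
        return [neighbor_d]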
Figure 4 is a flowchart of the prediction information encoder 116 implementing the process of Figure 3. First, the prediction information encoder 116 encodes the prediction block division type of the target coding block and stores it in the prediction information memory 115. At the same time, the prediction information encoder 116 sets the number N of prediction blocks in the target coding block based on the encoded prediction block division type, and resets a target prediction block number i to 0 (step S151).

Next, the prediction information encoder 116 determines whether the target prediction block is the prediction block to be encoded last in the target coding block and whether the number of prediction blocks in the target coding block is not less than 2 (step S152). For example, in the case of N = 2, the determination is positive with i = 1, and processing proceeds to step S157. In the case of N = 4 ((D) of Figure 20), the determination becomes positive with i = 3. When the determination is negative, processing proceeds to step S153. In the case of Figure 3, processing proceeds to step S153 when the target prediction block is prediction block T1, and to step S157 when the target prediction block is prediction block T2.

In step S153, the fusion identification information is encoded. The fusion identification information becomes positive (merge_flag = 1, generation of the predicted signal of the target prediction block using a motion information candidate) if the motion information of the target prediction block matches a candidate for motion information for block fusion; otherwise the fusion identification information becomes negative (merge_flag = 0, generation of the predicted signal of the target prediction block using the encoded motion information). Next, when the motion information of the target prediction block matches a candidate for motion information for block fusion, processing proceeds to step S164. In step S164, the prediction information encoder 116 determines whether there are two motion information candidates; when the number of motion information candidates is 2, it encodes the fusion block selection information and processing proceeds to step S165 (step S155). When the number of motion information candidates is 1, processing proceeds to step S165. On the other hand, when the motion information of the target prediction block does not match any candidate for motion information for block fusion, processing proceeds to step S156, in which the prediction information encoder 116 encodes the motion information detected by the motion information estimator 114, and then proceeds to step S165.

In step S157, the prediction information encoder 116 determines whether every piece of already encoded motion information of the target coding block coincides with the motion information of a neighboring block that does not belong to the target coding block. The description of this step S157 means that, in the case of N = 2, the motion information of prediction block T1 shown in Figure 3 is compared with the motion information of neighboring block D. Additionally, the description of step S157 means that, in the case of N = 4 ((D) of Figure 20), the target prediction block is the lower-right divided block and the pieces of motion information of the other three prediction blocks (upper left, upper right, and lower left) are compared with one another. When the determination is positive (or when the pieces of motion information under comparison are coincident), the number of motion information candidates to be used in block fusion of the target prediction block is 0, as shown in the example of (B) of Figure 3; the prediction information encoder 116 therefore encodes the motion information detected by the motion information estimator 114 without transmitting the block fusion information, and processing then proceeds to step S165 (step S160). On the other hand, when the determination is negative (or when the pieces of motion information under comparison do not coincide), processing proceeds to step S163.
In the case of N = 4, the pieces of motion information of the upper-right and lower-left blocks in the target coding block are those of blocks neighboring the target prediction block. For this reason, applying block fusion to the target (lower-right) prediction block when the pieces of motion information of the three prediction blocks (upper left, upper right, and lower left) coincide would mean that the predicted signals of the four prediction blocks in the target coding block are all generated using the same motion information. For this reason, in the case where N = 4 and where the pieces of motion information of the three prediction blocks (upper left, upper right, and lower left) are identical to one another, the number of motion information candidates of the target (lower-right) prediction block is set to 0.

In step S163, the prediction information encoder 116 determines whether the prediction block division type of the target coding block is a bisection type, and if the determination is negative, processing proceeds to step S153 (the description hereinafter will be omitted). When the determination in step S163 is positive, processing proceeds to step S158, in which the prediction information encoder 116 encodes the fusion identification information. In this case, since the number of motion information candidates to be used in block fusion of the target prediction block is 1, as in the example of (A) of Figure 3, there is no need to encode the fusion block selection information. Next, when the motion information of the target prediction block matches the candidate for motion information for block fusion, processing proceeds to step S165. When the motion information of the target prediction block does not match the candidate for motion information for block fusion, processing proceeds to step S160, in which the prediction information encoder 116 encodes the motion information detected by the motion information estimator 114, and processing then proceeds to step S165.

In step S165, the motion information of the target block is stored in the prediction information memory 115. Subsequently, in step S161, the prediction information encoder 116 determines whether encoding is complete for all the prediction blocks in the target coding block (whether i = N-1); when i = N-1, this prediction information encoding processing of the target coding block is terminated; when i < N-1, the number i is updated in step S162 (i = i+1), and processing returns to step S152 to perform the motion information encoding processing of the next prediction block.

Since candidates for motion information to be used in block fusion of a prediction block can be selected in advance using the pieces of information below as described above, it becomes possible to efficiently transmit the block fusion information.
1) The number of already encoded/already decoded prediction blocks in the target coding block
2) The prediction block division type of the target coding block
4) The motion information of the already encoded/already decoded prediction blocks in the target coding block
5) The motion information and prediction mode (intra-frame prediction/inter-frame prediction) of the blocks neighboring the target prediction block
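Seen from the bitstream side, the flowchart of Figure 4 reduces to deciding which syntax elements accompany each prediction block. The sketch below emits an abstract list of (element, value) pairs instead of entropy-coded bits; the element names and tuple representation are assumptions made only for illustration.

    def encode_prediction_block(detected, candidates):
        """Return the syntax elements written for one prediction block, given the
        motion information detected for it and the selected fusion candidates.
        Mirrors the three cases above: zero, one, or two candidates."""
        elements = []
        match = next((i for i, c in enumerate(candidates)
                      if (c["mv"], c["ref"]) == (detected["mv"], detected["ref"])), None)
        if not candidates:
            # No candidate: block fusion is impossible, so only motion
            # information is coded (no block fusion information at all).
            elements.append(("motion_info", (detected["mv"], detected["ref"])))
            return elements
        elements.append(("merge_flag", 1 if match is not None else 0))
        if match is None:
            elements.append(("motion_info", (detected["mv"], detected["ref"])))
        elif len(candidates) == 2:
            # Two candidates and fusion in use: signal which neighbor is selected.
            elements.append(("merge_flag_left", match))
        return elements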
Figure 5 is a flowchart showing a procedure of a predictive image coding method in the predictive image coding device 100 according to the present embodiment. First, block divider 102 divides an input image into 16x16 coding blocks (the image may be divided into blocks of another size or shape, or blocks of different sizes may be mixed in the frame). Then the prediction block division type selector 113 and the motion information estimator 114 determine the prediction block division type of the target coding block serving as a coding target and the motion information of each of its prediction blocks (step S101). Next, the prediction information encoder 116 encodes the prediction information (step S102, Figure 4).

Next, the predicted signal generator 103 generates the predicted signal of the target coding block based on the prediction block division type of the target coding block and the motion information of each of the prediction blocks, and a residual signal indicative of a difference between a pixel signal of the target coding block and the predicted signal is transformed and encoded by transformer 106, quantizer 107, and quantized transform coefficient encoder 111 (step S103). The encoded data of the prediction information and the quantized transform coefficients are then output through output terminal 112 (step S104). For predictive coding of the subsequent target coding block, the encoded residual signal is decoded by the inverse quantizer 108 and the inverse transformer 109 after or in parallel with these processes. Then adder 110 adds the decoded residual signal to the predicted signal to reconstruct a signal of the target coding block. The reconstructed signal is stored as a reference image in frame memory 104 (step S105). If processing is not complete for all the target coding blocks, processing returns to step S101 to carry out the processing for the next target coding block. When processing is complete for all the target coding blocks, the processing is terminated (step S106).

Next, predictive image decoding according to one embodiment will be described. Figure 6 is a block diagram showing a predictive image decoding device 200 according to one embodiment. This predictive image decoding device 200 is provided with an input terminal 201, a data analyzer 202, an inverse quantizer 203, an inverse transformer 204, an adder 205, an output terminal 206, a quantized transform coefficient decoder 207, a prediction information decoder 208, frame memory 104, predicted signal generator 103, and prediction information memory 115. The inverse quantizer 203, the inverse transformer 204, and the quantized transform coefficient decoder 207 function as a residual signal decoding means. The decoding means consisting of the inverse quantizer 203 and the inverse transformer 204 may be implemented using any means other than these. Additionally, the inverse transformer 204 may be omitted. The prediction information memory 115 and the prediction information decoder 208 function as a prediction information decoding means.

Input terminal 201 accepts input of compressed data resulting from compression encoding by the aforementioned predictive image coding method. This compressed data contains the quantized transform coefficient information resulting from transformation, quantization, and entropy coding of the residual signal for each of a plurality of divided coding blocks, and the encoded data of the prediction information for generation of the predicted signals of the blocks. The prediction information contains the block fusion information for performing block fusion using motion information candidates for block fusion, in addition to the prediction block division type of the target coding block and the motion information of the prediction blocks.
Additionally, the motion information contains the motion vector, the inter-frame prediction mode (forward/backward/bidirectional prediction), the reference frame number, and so on.

The data analyzer 202 analyzes the compressed data input through the input terminal 201, separates the data about the target coding block serving as a decoding target into the encoded data of the quantized transform coefficients and the encoded data of the prediction information, and outputs them through line L202a and line L202b to the quantized transform coefficient decoder 207 and the prediction information decoder 208, respectively.

The prediction information decoder 208 selects a motion information candidate to be used in block fusion of each prediction block and entropy-decodes the encoded data of the prediction information associated with the target coding block. The decoded prediction information is output through line L208a and line L208b to the predicted signal generator 103 and the prediction information memory 115, respectively. The prediction information memory 115 stores the input prediction information. The processing of the prediction information decoder 208 will be described below.

The predicted signal generator 103 acquires previously reconstructed signals from frame memory 104, based on the prediction information of the target coding block fed through line L208a, and generates a predicted signal of each prediction block in the target coding block. The predicted signal thus generated is output through line L103 to adder 205.

The quantized transform coefficient decoder 207 entropy-decodes the encoded data of the quantized transform coefficients of the residual signal in the target coding block and outputs the result through line L207 to inverse quantizer 203. Inverse quantizer 203 performs an inverse quantization of the residual signal information of the target coding block fed through line L207. Inverse transformer 204 performs an inverse discrete cosine transform of the inversely quantized data.

Adder 205 adds the predicted signal generated by the predicted signal generator 103 to the residual signal restored by inverse quantizer 203 and inverse transformer 204, and outputs a reconstructed pixel signal of the target coding block through line L205 to output terminal 206 and frame memory 104. Output terminal 206 outputs the signal to the outside of decoder 200 (for example, to a display). Frame memory 104 stores the reconstructed image output from adder 205 as a reference image to be used for the next decoding processing.

Figure 7 is a flowchart of the prediction information decoder 208 implementing the processing of Figure 3. First, the prediction information decoder 208 decodes the prediction block division type of the target coding block and stores it in the prediction information memory 115. At the same time, the prediction information decoder 208 sets the number N of prediction blocks in the target coding block based on the decoded prediction block division type, and resets the target prediction block number i to 0 (step S251). Next, the prediction information decoder 208 determines whether the target prediction block is the prediction block to be decoded last in the target coding block and whether the number of prediction blocks in the target coding block is not less than 2 (step S252). For example, in the case of N = 2, the determination is positive with i = 1, and processing proceeds to step S258. In the case of N = 4 ((D) of Figure 20), the determination is positive with i = 3.
When the determination is negative, processing proceeds to step S253. In Figure 3, processing proceeds to step S253 when the target prediction block is prediction block T1, and to step S258 when the target prediction block is prediction block T2.

In step S253, the fusion identification information is decoded. When the fusion identification information is positive (merge_flag = 1), the fusion identification information indicates that the predicted signal of the target prediction block is to be generated using a motion information candidate. On the other hand, when the fusion identification information is negative (merge_flag = 0), the predicted signal of the target prediction block is generated using the decoded motion information. In the next step S254, the prediction information decoder 208 determines whether the fusion identification information indicates decoding of motion information, i.e., whether the value of merge_flag is 0. When the decoded value of merge_flag is 0, the prediction information decoder 208 decodes the motion information for generation of the predicted signal of the target prediction block (step S257), and processing then proceeds to step S267. When the value of merge_flag is 1, the prediction information decoder 208 determines in step S266 whether the number of motion information candidates to be used in block fusion is 2; when the number of candidates is 2, the fusion block selection information is decoded and processing proceeds to step S256 (step S255). When the number of motion information candidates to be used in block fusion of the target prediction block is 1, processing proceeds directly to step S256. In step S256, when the number of motion information candidates is 1, the prediction information decoder 208 determines that candidate's motion information as the motion information of the target prediction block. When the number of motion information candidates is 2, the prediction information decoder 208 determines the motion information of the neighboring block indicated by the fusion block selection information as the motion information of the target prediction block.

In step S258, the prediction information decoder 208 determines whether every piece of already decoded motion information of the target coding block coincides with the motion information of a neighboring block that does not belong to the target coding block. The description of this step S258 means that, in the case of N = 2, the motion information of prediction block T1 shown in Figure 3 is compared with the motion information of neighboring block D. Additionally, the description of this step S258 means that, in the case of N = 4 ((D) of Figure 20), the target prediction block is the lower-right divided block and the pieces of motion information of the three other prediction blocks (upper left, upper right, and lower left) are compared with one another. When the determination is positive (or when the pieces of motion information under comparison are coincident), the number of motion information candidates to be used in block fusion of the target prediction block is 0, as shown in the example of (B) of Figure 3; in that case, the prediction information decoder 208 decodes the motion information to be used for generation of the predicted signal of the target prediction block without decoding the block fusion information, and processing proceeds to step S267 (step S262).
On the other hand, when the determination is negative (or when the pieces of motion information under comparison do not coincide), processing proceeds to step S265. In the case of N = 4, the pieces of motion information of the upper-right and lower-left blocks in the target coding block are those of blocks neighboring the target prediction block. For this reason, applying block fusion to the target (lower-right) prediction block when the pieces of motion information of the three prediction blocks (upper left, upper right, and lower left) coincide would mean that the predicted signals of the four prediction blocks in the target coding block are all generated from the same motion information. For this reason, in the case where N = 4 and where the pieces of motion information of the three prediction blocks (upper left, upper right, and lower left) are identical to one another, the number of motion information candidates of the target (lower-right) prediction block is set to 0.

In step S265, the prediction information decoder 208 determines whether the prediction block division type of the target coding block is a bisection type, and if the determination is negative, processing proceeds to step S253 (the description hereinafter will be omitted). When the determination in step S265 is positive, processing proceeds to step S259, in which the prediction information decoder 208 decodes the fusion identification information. In this case, as in the example of (A) of Figure 3, the number of motion information candidates to be used in block fusion of the target prediction block is 1, and therefore there is no need to decode the fusion block selection information.

In the next step S260, the prediction information decoder 208 determines whether the fusion identification information indicates decoding of motion information, i.e., whether the value of merge_flag is 0. When the decoded value of merge_flag is 0, the prediction information decoder 208 decodes the motion information for generation of the predicted signal of the target prediction block (step S262), and processing proceeds to step S267. When the value of merge_flag is 1, processing proceeds to step S261. In step S261, since the number of motion information candidates is 1, as shown in (A) of Figure 3, the prediction information decoder 208 determines the motion information of neighboring block D as the motion information of the target prediction block, and processing proceeds to step S267.

In step S267, the restored motion information of the prediction block is stored in the prediction information memory 115. Subsequently, in step S263, the prediction information decoder 208 determines whether decoding is complete for all the prediction blocks in the target coding block (whether i = N-1); when i = N-1, this prediction information decoding processing of the target coding block is terminated; when i < N-1, the number i is updated in step S264 (i = i+1), and processing returns to step S252 to perform the motion information decoding processing of the next prediction block.

The following describes a predictive image decoding method in the predictive image decoding device 200 shown in Figure 6, using Figure 8. First, compressed data is input through input terminal 201 (step S201). Then the data analyzer 202 performs an analysis of the compressed data to extract the encoded data of the prediction information and of the quantized transform coefficients for a target region serving as a decoding target. The prediction information is decoded by the prediction information decoder 208 (S203).
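The per-block parsing performed in this step (the flow of Figure 7) can be summarized as a sketch. Here read(name) stands for entropy decoding of the named syntax element, and `candidates` must be selected exactly as on the encoder side so that encoder and decoder stay synchronized; the names and tuple representation are assumptions for illustration.

    def decode_prediction_block(read, candidates):
        """Recover the motion information of one prediction block from the stream.
        read(name) entropy-decodes the next value of syntax element `name`."""
        if not candidates:
            return read("motion_info")   # no candidate: motion information was coded
        if read("merge_flag") == 0:
            return read("motion_info")   # block fusion not used for this block
        if len(candidates) == 1:
            chosen = candidates[0]       # selection information was not coded
        else:
            chosen = candidates[read("merge_flag_left")]
        return (chosen["mv"], chosen["ref"])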
Subsequently, based on the restored prediction information, the predicted signal generator 103 generates the predicted signal of the target coding block (S204). The quantized transform coefficients decoded by the quantized transform coefficient decoder 207 are subjected to inverse quantization in inverse quantizer 203 and to inverse transformation in inverse transformer 204, to generate a reconstructed residual signal (S205). Then the generated predicted signal is added to the reconstructed residual signal to generate a reconstructed signal, and this reconstructed signal is stored in frame memory 104 for reconstruction of the next target coding block (step S206). If further compressed data exists, the processes S204 to S206 are repeated (S207) until all the data is processed to the last.

The examples described above concern cases where the number of blocks neighboring the prediction block is not more than 2; below, attention is given to situations where the number of neighboring blocks in contact with the upper and left boundaries of a prediction block is not less than 3. The example of Figure 3 concerned the case where there are two neighboring blocks in contact with a prediction block, but there are situations where a prediction block is in contact with two or more neighboring blocks, depending on the combination of the prediction block division types of a coding block and its neighboring blocks. Figure 9 shows an example where three neighboring blocks are in contact with a prediction block. Block 301 of Figure 20 will be described as an example, but the same description also applies to blocks 302, 304, 305, 306, and 307.

In (A) and (B) of Figure 9, a target coding block 400 has two prediction blocks resulting from vertical bisection of block 400, while a block 401 in contact with the left side of prediction block T1 is horizontally bisected (divided into upper and lower blocks). For this reason, prediction block T1 is in contact with three neighboring blocks A, B, and C. In this case, when it is determined in advance on the coding side and on the decoding side that the neighboring blocks are represented by the two neighboring blocks A and B in contact with the upper-left corner of the target prediction block, the number of neighboring blocks is always limited to 2, and the technique described above is therefore applicable.

On the other hand, it is also possible to employ a technique of virtually horizontally bisecting prediction block T1 according to the prediction block division type of neighboring block 401, as shown in (B) of Figure 9. In this case, the target prediction block T1 is divided into blocks T1a and T1b, and the predicted signal of block T1a and the predicted signal of block T1b are generated using the two pieces of motion information belonging to neighboring blocks A and C, respectively. On this occasion, the fusion block selection information can be efficiently encoded without changing the configuration of the block fusion information, by letting the selectable candidates for the fusion block selection information be the motion information of neighboring block B in (A) of Figure 9 and the combination of the pieces of motion information of neighboring blocks A and C in (B) of Figure 9. On the other hand, in the case where either (A) of Figure 9 or (B) of Figure 9 is identified by the fusion block selection information and where (B) of Figure 9 is selected, it is also possible to adopt a method of further transmitting
second fusion identification information for each virtual block, identifying whether the predicted signal of each virtual block is to be generated based on the motion information of the neighboring block or whether the motion information is to be encoded/decoded.

It is also possible to adopt a method without dividing prediction block T1, in which the selectable candidates for the fusion block selection information in prediction block T1 are the three pieces of motion information of neighboring blocks A, B, and C, and in which the motion information to be used for generation of the predicted signal of T1 is selected from the three pieces of information; in that case, however, the following changes are required.
1. A flow of "acquire the prediction block division type of the neighboring block and derive the number of blocks neighboring the prediction block" is added before step S164 of Figure 4 and before step S266 of Figure 7.
2. Step S164 of Figure 4 and step S266 of Figure 7 are changed to "are there two or more pieces of motion information of selected candidates?"
3. The fusion block selection information is extended to information for selecting one out of three or more candidates.

The block fusion processing shown in (A) and (B) of Figure 9 can be implemented by extending step S256 of Figure 7 to the processing shown in Figure 11. First, in step S256a, the prediction block division type of the coding block in contact with the target prediction block is acquired. In the next step S256b, the number M of prediction blocks in the neighboring block indicated by the decoded fusion block selection information that are in contact with the target prediction block is derived from the acquired prediction block division type. For example, in the case of (B) of Figure 9, M = 2. Additionally, in step S256c, it is determined whether the value of M is greater than 1 (M > 1). In the case of M > 1, the target prediction block is divided into M virtual blocks, and the pieces of motion information of the M neighboring blocks are set to the M divided virtual blocks (it may also be contemplated that fusion identification information is additionally sent for each virtual block and it is determined whether the motion information is to be decoded). In the case of M = 1, the motion information of the neighboring block serving as the block fusion candidate is set as the motion information of the target prediction block.

According to Figures 7 and 11 as described above, the selection of a candidate for motion information in the example of Figure 9 is performed based on the pieces of information below.
1) The number of already encoded/already decoded prediction blocks in the target coding block
2) The prediction block division type of the target coding block
3) The prediction block division type of the block neighboring the target prediction block
Thus, the information of 3), which is not used in the selection of a candidate for motion information in the example of Figure 3, is used in cases where there are three or more motion information candidates.
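A sketch of the virtual-division handling of Figure 11 (steps S256a to S256c) follows. Deriving the number M of facing neighbor prediction blocks from the neighbor's division type is geometry-dependent, so here the list of facing neighbors is taken as an input; that simplification is an assumption.

    def merge_with_virtual_division(facing_neighbors):
        """Figure 11, sketched. facing_neighbors lists the motion information of
        the M prediction blocks of the neighboring block that are in contact
        with the merged side of the target prediction block (M is derived from
        the neighbor's prediction block division type). Returns a list of
        (virtual_block_index, motion_info) assignments."""
        m = len(facing_neighbors)
        if m > 1:
            # Virtually divide the target prediction block into M blocks and
            # give each virtual block the motion information of its neighbor.
            return list(enumerate(facing_neighbors))
        # M = 1: the single candidate is used for the whole target prediction block.
        return [(0, facing_neighbors[0])]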
(C) of Figure 9 shows an example in which the block neighboring the left side of the target coding block 400 is asymmetrically bisected. In this case, it is also possible to adopt the technique of virtually bisecting prediction block T1 according to the prediction block division type of neighboring block 401 (into blocks T1a and T1b). That is, the predicted signal of target prediction block T1 can be generated using the combination of the pieces of motion information of neighboring blocks A and C in (C) of Figure 9 as the candidate for motion information for block fusion of prediction block T1.

In cases where the prediction block division type of the coding block is a type in which the number of prediction blocks is 1, as with block 300 of Figure 20, it is also possible, as shown in (D) to (F) of Figure 9, to apply the technique of virtually horizontally dividing prediction block T1 (block 400) according to the prediction block division type of neighboring block 401 (division into a plurality of blocks arranged in the vertical direction) and to generate the predicted signal for each divided block. Additionally, in cases where neighboring block 402 is vertically divided (into a plurality of blocks arranged in the horizontal direction), which are not shown, it is possible to apply a method of virtually vertically dividing prediction block T1 (block 400) according to the prediction block division type of neighboring block 402 and generating the predicted signal for each divided block.

In cases where a block neighboring the prediction block includes an intra-frame predicted (intra) block, it is also possible to apply the technique of virtually dividing the prediction block and generating the predicted signal, by determining rules in advance. (A) to (F) of Figure 10 show examples where an intra-frame predicted (intra) block is included in a plurality of neighboring blocks A, C, E, and G in contact with the left side of the prediction block. Based on the prediction block division type of the neighboring block and the prediction mode (inter-frame/intra-frame prediction) in the prediction information, an intra-frame predicted block in the neighboring block is virtually integrated with an inter-frame predicted block (thick lines in the drawing). In these examples, an intra-frame predicted block is virtually integrated with the inter-frame predicted block that is closest to the upper-left corner of the neighboring block and closest to the intra-frame block. As a consequence, prediction block T1 is virtually divided according to the number of inter-frame predicted blocks in the neighboring block, as shown in (A) to (F) of Figure 10. Thus, even in cases where the neighboring block includes an intra-frame predicted block, the predicted signal can be generated by block fusion using the motion information of an inter-frame predicted block in the neighboring block. There are no restrictions on the rules for integrating an intra-frame predicted block with an inter-frame predicted block in the neighboring block. It may be contemplated that a plurality of such rules are prepared and a rule is selected for each frame or each slice to implement the encoding.

In this case, the selection of a candidate for motion information is performed based on the pieces of information below.
1) The number of already encoded/already decoded prediction blocks in the target coding block
2) The prediction block division type of the target coding block
3) The prediction block division type of the block neighboring the target prediction block
5) The prediction mode (intra-frame prediction/inter-frame prediction) of the block neighboring the target prediction block

Figure 12 shows examples where coding block 400 and neighboring block 402 are both vertically bisected, but their division shapes are different. In these examples, prediction block T1 (the block including blocks T1a and T1b) in (A) of Figure 12 and prediction block T2 (the block including blocks T2a and T2b) in (B) of Figure 12 also each have three neighboring blocks.
For T1 in (A) of Figure 12, the processing flow of Figure 11 is applied to step S256 of Figure 7, thereby making it possible to perform block fusion by setting the pieces of motion information of blocks Ba and Bb to the respective blocks T1a and T1b resulting from the virtual vertical bisection of prediction block T1. For T2 in (B) of Figure 12, the processing flow of Figure 13 described below is applied to step S261 of Figure 7, thereby making it possible to perform block fusion by setting the pieces of motion information of blocks Ba and Bb to the respective blocks T2a and T2b resulting from the virtual vertical bisection of prediction block T2. On this occasion, it is also possible to adopt the method of transmitting second fusion identification information for each virtual block and identifying whether the predicted signal of the virtual block is generated based on the motion information of the neighboring block or whether the motion information is encoded/decoded.

It is also possible to adopt a method in which prediction block T2 is not divided, the two pieces of motion information of block Ba and block Bb are defined as motion information candidates to be used in block fusion of prediction block T2, and one of the pieces of motion information of block Ba and block Bb is selected as the motion information to be used for generation of the predicted signal of T2; in that case, however, it is necessary to extend the flows of Figure 4 and Figure 7 as described below.
1. A flow of "acquire the prediction block division type of the neighboring block and derive the number of blocks neighboring the prediction block" is added after step S158 of Figure 4 and after step S259 of Figure 7.
2. Step S159 of Figure 4 and step S260 of Figure 7 are changed to "are there two or more pieces of motion information of selected candidates?"
3. A step of encoding/decoding the fusion block selection information is added after step S159 of Figure 4 and after step S260 of Figure 7.

The flow of Figure 13 will now be described. In Figure 13, first, in step S261a, the prediction block division type of the coding block in contact with the target prediction block is acquired. In the next step S261b, the number M of prediction blocks that are in contact with the target prediction block and belong to the neighboring block not belonging to the target coding block is derived from the acquired prediction block division type. For example, in the case shown in (B) of Figure 12, M = 2. Additionally, it is determined in step S261c whether the value of M is greater than 1 (M > 1). In the case of M > 1, the target prediction block is divided into M virtual blocks, and the pieces of motion information of the M neighboring blocks are set to the M divided virtual blocks (it is also possible to additionally send the fusion identification information for each virtual block and determine whether the motion information is to be decoded). In the case of M = 1, the motion information of the neighboring block serving as the block fusion candidate is set as the motion information of the target prediction block.

According to Figures 12 and 13 as described above, the selection of a motion information candidate in the example of Figure 12 is performed based on the pieces of information below.
1. The number of already encoded/already decoded prediction blocks in the target coding block
2. The prediction block division type of the target coding block
3. The prediction block division type of the block neighboring the target prediction block

It should be noted that although Figure 12 describes the example of vertical division, the same processing also applies to examples of horizontal division (division into a plurality of blocks arranged in the vertical direction) such as blocks 306 and 307 of Figure 20. Further modifications described below can also be adopted.

(Candidates for motion information) In the description above, the pieces of motion information of the blocks in contact with the top side and the left side of the prediction block were defined as candidates for block fusion, but a limitation may also be set based on the prediction block division types of the target coding block and the neighboring blocks, as shown in (A) and (B) of Figure 14 and (A) of Figure 15. (A) and (B) of Figure 14 show examples where there are two neighboring blocks and where the motion information of the neighboring blocks on the side where the prediction block is in contact with two or more neighboring blocks, out of the top side and the left side of the prediction block, is excluded from the candidates for block fusion. In this case, there is no need to encode the fusion block selection information, which can reduce the additional information. The motion information candidates to be used in block fusion of prediction block T1 in (A) of Figure 14 and of prediction block T1 in (B) of Figure 14 are determined to be the motion information of block B and of block A, respectively.

(A) of Figure 15 shows a technique of automatically selecting the motion information candidates to be used in block fusion of prediction blocks T1 and T2, based on the prediction block division type of the target coding block. (B) of Figure 15 shows an example in which the prediction blocks to which block fusion is applied are limited according to the prediction block division type of the target coding block and the number of already encoded/already decoded blocks in the target coding block. In the example shown in Figure 3, when the motion information of block T1 coincides with that of block D, the motion information of block D is excluded from the motion information candidates to be used in block fusion of block T2; in contrast, in the case shown in (B) of Figure 15, block D is excluded from the candidates for block fusion based on the number of already encoded/already decoded blocks in the target coding block, without comparison between the motion information of block T1 and the motion information of block D. In this way, the prediction blocks to which block fusion is applied can be limited by the number of motion vectors to be encoded in the target coding block.

Additionally, it is also possible to impose a limitation according to the block sizes of the two neighboring blocks in contact with the upper-left corner of the prediction block and the block size of the prediction block. For example, when the size of the right side of the neighboring block in contact with the left side of the target prediction block is smaller than a preset size (for example, a half or a quarter of the length of the left side of the prediction block), the motion information of that neighboring block may be excluded from the candidates for block fusion of the target prediction block. When a limitation is set on the candidates for motion information in this manner, the amount of code of the block fusion information can be reduced.

(Candidate Selection for Motion Information) The selection of candidates for motion information is performed based on the pieces of information below, but a method of using the information is not limited to the methods described above.
Means for selecting motion information candidates using these pieces of information can be implemented by the configurations of Figure 1 and Figure 6.
1) The number of already encoded/already decoded prediction blocks in the target coding block
2) The prediction block division type of the target coding block
3) The prediction block division type of the block neighboring the target prediction block
4) The motion information of the already encoded/already decoded prediction blocks in the target coding block
5) The motion information and prediction mode (intra-frame prediction/inter-frame prediction) of the block neighboring the target prediction block

(Prediction block coding) In the description above, the prediction blocks in the coding block are encoded/decoded in raster-scan order, but the above-described selection of candidates for motion information to be used in block fusion is also applicable in cases where the prediction blocks are encoded/decoded in any order. For example, in the example of Figure 3, where prediction block T2 of target coding block 400 is encoded/decoded first, the motion vector of prediction block T2 is not included as a candidate for motion information to be used in block fusion of prediction block T1.

(Block format) In the description above, the partial regions in the coding block are always rectangular, but they may have any format. In this case, format information may be included in the prediction information of the coding block.

(Transformer and inverse transformer) The transform process of the residual signal may be performed at a fixed block size, or the transform process may be performed after a target region is subdivided according to the partial regions.

(Prediction information) In the description above, the predicted signal generation method was described as inter-frame prediction (prediction using motion vectors and reference frame information), but the predicted signal generation method is not limited to this. The aforementioned predicted signal generation process is also applicable to intra-frame prediction and to prediction methods that include luminance compensation, and the like. In these cases, the prediction information contains mode information, luminance compensation parameters, and so on. In Figure 10, an intra-frame predicted block in the neighboring block is virtually integrated with an inter-frame predicted block, but it is also possible to adopt a method in which the prediction block is virtually divided regardless of the prediction mode of the neighboring block and the partial signals in the prediction block are predicted by intra-frame prediction.

(Color signal) The description above contains no particular mention of the color format, but the predicted signal generation process may also be performed for the color signal or the color difference signal, separately from the luminance signal. The predicted signal generation process may also be performed in synchronism with the processing of the luminance signal.

(Block noise removal process) Although not described above, the reconstructed image may be subjected to a block noise removal process; in that case, the noise removal process is preferably performed on the boundary portions of the partial regions. In the cases where the prediction block is virtually divided as in the examples shown in Figures 9, 10, and 12, the block noise removal process may also be applied to the boundary between the virtually divided blocks.
The predictive image coding method and the predictive image decoding method according to the embodiments of the present invention may also be provided stored as programs on a recording medium. Examples of recording media include recording media such as floppy disks (registered trademark), CD-ROMs, DVDs, or ROMs, or semiconductor memories, or the like.

Figure 16 is a block diagram showing the modules of a program that can execute the predictive image coding method. The predictive image coding program P100 is provided with block division module P101, motion information estimation module P102, predicted signal generation module P103, storage module P104, subtraction module P105, transform module P106, quantization module P107, inverse quantization module P108, inverse transform module P109, addition module P110, quantized transform coefficient coding module P111, prediction division type selection module P112, prediction information storage module P113, and prediction information coding module P114. The functions implemented by executing the respective modules on a computer are the same as the functions of the predictive image coding device 100 mentioned above. That is, block division module P101, motion information estimation module P102, predicted signal generation module P103, storage module P104, subtraction module P105, transform module P106, quantization module P107, inverse quantization module P108, inverse transform module P109, addition module P110, quantized transform coefficient coding module P111, prediction division type selection module P112, prediction information storage module P113, and prediction information coding module P114 cause the computer to execute the same functions as block divider 102, motion information estimator 114, predicted signal generator 103, frame memory 104, subtractor 105, transformer 106, quantizer 107, inverse quantizer 108, inverse transformer 109, adder 110, quantized transform coefficient encoder 111, prediction block division type selector 113, prediction information memory 115, and prediction information encoder 116, respectively.

Figure 17 is a block diagram showing modules of a program that can execute the predictive image decoding method. The predictive image decoding program P200 is provided with quantized transform coefficient decoding module P201, prediction information decoding module P202, prediction information storage module P113, inverse quantization module P206, inverse transform module P207, addition module P208, predicted signal generation module P103, and storage module P104. The functions implemented by executing the respective modules are the same as those of the respective components of the predictive image decoding device 200 mentioned above. That is, quantized transform coefficient decoding module P201, prediction information decoding module P202, prediction information storage module P113, inverse quantization module P206, inverse transform module P207, addition module P208, predicted signal generation module P103, and storage module P104 cause the computer to execute the same functions as quantized transform coefficient decoder 207, prediction information decoder 208, prediction information memory 115, inverse quantizer 203, inverse transformer 204, adder 205, predicted signal generator 103, and frame memory 104, respectively.
The predictive image coding program P100 or the predictive image decoding program P200 configured as described above is stored on a recording medium SM and executed by the computer described below. Figure 18 is a drawing showing a hardware configuration of a computer for executing programs recorded on a recording medium, and Figure 19 is a perspective view of a computer for executing programs stored on a recording medium. Equipment for executing programs stored on a recording medium is not limited to computers, but may be a DVD player, a signal decoder, a mobile phone, or the like, provided with a CPU and configured to perform processing and control based on software.

As shown in Figure 19, the computer C10 is provided with a reading device C12 such as a floppy disk drive, a CD-ROM drive, or a DVD drive, a working memory (RAM) C14 in which an operating system is resident, a memory C16 for storing programs stored on the recording medium SM, a monitor device C18 such as a display, a mouse C20 and a keyboard C22 as input devices, a communication device C24 for transmission and reception of data and others, and a CPU C26 for controlling execution of the programs. When the recording medium SM is put into the reading device C12, the computer C10 becomes able to access the predictive image coding/decoding program stored on the recording medium SM through the reading device C12, and becomes able to operate as the image coding device or the image decoding device according to the embodiment of the present invention, based on the image coding or decoding program.

As shown in Figure 18, the predictive image coding program and the image decoding program may be provided in the form of a computer data signal CW superimposed on a carrier wave, over a network. In this case, the computer C10 becomes able to execute the predictive image coding program or the predictive image decoding program after the predictive image coding program or the image decoding program received by the communication device C24 is stored in memory C16.

Still another embodiment will be described below. Figure 24 is a drawing schematically showing a configuration of a video coding device according to one embodiment. The video coding device 10 shown in Figure 24 is provided with block divider 501, subpartition generator 502, frame memory 503, motion detector 504, predicted signal generator 505, motion predictor 506, subtractor 507, residual signal generator 508, transformer 509, quantizer 510, inverse quantizer 511, inverse transformer 512, adder 513, and entropy encoder 514. An input image signal (video signal) fed into this video coding device 10 comprises a sequence of image signals of frame units (hereinafter referred to as frame image signals).

Block divider 501 sequentially selects, from the input image signal fed through line L501, frame image signals serving as coding targets, i.e., input images. Block divider 501 divides an input image into a plurality of partitions, or blocks. Block divider 501 sequentially selects the plurality of blocks as target blocks of coding and outputs a pixel signal of each of the target blocks (hereinafter referred to as a target block signal) through line L502. In the video coding device 10, the coding processing described below is carried out in units of blocks. Block divider 501 may divide, for example, an input image into a plurality of blocks each consisting of 8x8 pixels. However, the blocks may be of any size or shape. The blocks may be, for example, blocks of 32x16 pixels or blocks consisting of 16x64 pixels.
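The block division performed by block divider 501 can be sketched in a few lines; the raster order and the plain nested-list pixel representation are assumptions made only for illustration.

    def divide_into_blocks(frame, block_h=8, block_w=8):
        """Divide a frame (a 2-D list of pixel values) into block_h x block_w
        blocks in raster order; the 8x8 default mirrors the example above."""
        blocks = []
        for top in range(0, len(frame), block_h):
            for left in range(0, len(frame[0]), block_w):
                blocks.append([row[left:left + block_w]
                               for row in frame[top:top + block_h]])
        return blocks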
The subpartition generator 502 divides a target block fed through line L502 into a plurality of subpartitions. Figure 25 is a drawing for explaining the generation of subpartitions. As shown in Figure 25, the subpartition generator 502 divides the target block P into two subpartitions SP1 and SP2 by a straight line Ln expressed by the linear expression of formula (1). y = mx + k (1) For example, the subpartition generator 502 can be configured as follows: while changing the parameters m and k, it obtains a predicted signal of the subpartition SP1 and a predicted signal of the subpartition SP2, and determines, as the parameters of the straight line Ln, values of m and k that minimize an error between the predicted signal of the subpartition SP1 and an image signal of the subpartition SP1 and an error between the predicted signal of the subpartition SP2 and an image signal of the subpartition SP2. The subpartition generator 502 outputs the parameters m and k of formula (1) thus determined, as format information specifying the formats of the subpartitions in the target block P, that is, as format information specifying the formats of the first subpartition SP1 and the second subpartition SP2, through line L504. Any linear expression expressing the straight line Ln may be used. For example, the straight line Ln may be one expressed by formula (2). y = -x/tanθ + ρ/sinθ (2) In this case, the format information is θ and ρ. The format information may also be information indicative of two arbitrary points through which the straight line Ln passes, for example, intersections between the straight line and the boundaries of the block P. The block does not always need to be divided by a straight line; the subpartitions may be generated based on a pattern selected from a plurality of patterns prepared in advance. In this case, information such as an index specifying the selected pattern can be used as the format information. In the description below, the coordinates are set with an origin at the uppermost and leftmost position of the target block, a subpartition including the uppermost and leftmost pixel in the target block P is defined as a first subpartition, and the other as a second subpartition. However, it is noted that any defining method is applicable herein: for example, a subpartition that does not include the center position in the target block may be defined as a first subpartition, and the other as a second subpartition. In this case, the format information may be intersection information of the block boundaries or pattern identification information. The frame memory 503 stores previously reconstructed image signals fed through line L505, i.e., frame image signals encoded in the past (which will be referred to hereinafter as frame reference image signals). The frame memory 503 outputs the frame reference image signals through line L506. The motion detector 504 receives the target block signal fed through line L502, the format information of the block fed through line L504, and the frame reference image signals fed through line L506. The motion detector 504 searches image signals in a predetermined range of the frame reference image signals for a signal similar to an image signal of a subpartition serving as a processing target, and calculates a motion vector. This motion vector is an amount of spatial displacement between a region in a frame reference image signal having a pixel signal similar to the image signal of the subpartition serving as a processing target, and the target block. The motion detector 504 outputs the motion vector thus calculated through line L507.
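Purely as an illustration of the division by formula (1), the sketch below shows one way a block could be partitioned into two subpartitions by the straight line y = mx + k. The Python code, the function name, and the side-assignment rule are illustrative assumptions, not part of the embodiment itself.

```python
# A minimal sketch, assuming pixel coordinates with the origin at the
# uppermost and leftmost position of the target block, as in the text.
def split_block(block_w, block_h, m, k):
    """Classify each pixel of a block_w x block_h block by which side of
    the line y = m*x + k it lies on, returning two lists of (x, y)."""
    sp1, sp2 = [], []
    for y in range(block_h):
        for x in range(block_w):
            (sp1 if y < m * x + k else sp2).append((x, y))
    # By the convention above, the subpartition containing the uppermost
    # and leftmost pixel (0, 0) is the first subpartition SP1.
    if (0, 0) in sp2:
        sp1, sp2 = sp2, sp1
    return sp1, sp2

# Example: split an 8 x 8 block by the line y = 0.5*x + 2.
sp1, sp2 = split_block(8, 8, 0.5, 2.0)
```

An encoder configured as described above would evaluate such a split for many (m, k) pairs and keep the pair minimizing the prediction errors of the two subpartitions.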
The motion detector 504 can also be configured to detect a motion vector for the target block as well, and to determine whether a predicted signal is to be generated for each of the two subpartitions resulting from division of the target block. This determination may be such that, when an error between the predicted signal of the target block and the image signal of the target block is smaller than errors between the predicted signals of the two subpartitions generated by division of the target block and the image signals of the two subpartitions, the target block is not divided into subpartitions. When this determination is carried out, information indicating the result of the determination is encoded as split applicability information, and the format information may be encoded only when the split applicability information indicates that the target block is to be divided into subpartitions. The predicted signal generator 505 generates the predicted signal of the image signal of the subpartition serving as a processing target, from an image signal in the predetermined range of the frame reference image signal fed through line L506, based on the motion vector fed through line L507 and the format information of the block fed through line L504. The predicted signal generator 505 combines the predicted signals of the respective subpartitions in the target block to generate the predicted signal of the target block. The predicted signal generator 505 outputs the predicted signal thus generated through line L508. The predicted signal may be generated by intra-frame prediction instead of inter-frame prediction. The motion predictor 506 generates a predicted motion vector of a processing target subpartition in a target block, based on the format information of the block fed through line L504, the motion vector fed through line L507, and a motion vector of a previously processed partial region, i.e., a block or a subpartition, prior in the processing order to the processing target subpartition. The motion predictor 506 outputs the predicted motion vector thus generated through line L509. The motion predictor 506 may select one predicted motion vector out of a plurality of candidates for the predicted motion vector. In this case, the motion predictor 506 also outputs indication information specifying the selected predicted motion vector, through line L510. If the candidates for the predicted motion vector of the processing target subpartition are limited to one according to a predetermined rule shared with the decoder side, the output of the indication information may be omitted. The subtractor 507 subtracts the predicted motion vector fed through line L509 from the motion vector of the processing target subpartition fed through line L507, to generate a differential motion vector. The subtractor 507 outputs the differential motion vector thus generated through line L511. The residual signal generator 508 subtracts the predicted signal of the target block fed through line L508 from the target block signal fed through line L502, to generate a residual signal. The residual signal generator 508 outputs the residual signal thus generated through line L512. The transformer 509 performs an orthogonal transformation of the residual signal fed through line L512, to generate transformation coefficients. The transformer 509 outputs the transformation coefficients thus generated through line L513. This orthogonal transformation can be performed, for example, by DCT. However, the transformation used by the transformer 509 may be any transformation.
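The relations carried by the subtractor 507 and the residual signal generator 508 are simple differences, which the following sketch makes explicit; the tuple and list representations are illustrative assumptions only.

```python
# A minimal sketch of the two subtraction relations described above.
def differential_motion_vector(mv, pmv):
    # Subtractor 507: differential MV = subpartition MV - predicted MV.
    return (mv[0] - pmv[0], mv[1] - pmv[1])

def residual_signal(target, predicted):
    # Residual signal generator 508: residual = target - predicted.
    return [t - p for t, p in zip(target, predicted)]

# Example: mv = (3, -2) and pmv = (1, -1) give dmv = (2, -1).
```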
The quantizer 510 quantizes the transformation coefficients fed through line L513, to generate quantized transformation coefficients. The quantizer 510 outputs the quantized transformation coefficients thus generated through line L514. The inverse quantizer 511 performs an inverse quantization of the quantized transformation coefficients fed through line L514, to generate inversely quantized transformation coefficients. The inverse quantizer 511 outputs the inversely quantized transformation coefficients thus generated through line L515. The inverse transformer 512 performs an inverse orthogonal transformation of the inversely quantized transformation coefficients fed through line L515, to generate a reconstructed residual signal. The inverse transformer 512 outputs the reconstructed residual signal thus generated through line L516. The inverse transformation used by the inverse transformer 512 is a process symmetric to the transformation of the transformer 509. The transformation is not always essential, and the video coding device does not always need to be provided with the transformer 509 and the inverse transformer 512. Likewise, the quantization is not always essential, and the video coding device does not always need to be provided with the quantizer 510 and the inverse quantizer 511. The adder 513 adds the reconstructed residual signal fed through line L516 to the predicted signal of the target block fed through line L508, to generate a reconstructed image signal. The adder 513 outputs the reconstructed image signal as a previously reconstructed image signal through line L505. The entropy encoder 514 encodes the quantized transformation coefficients fed through line L514, the format information of the target block fed through line L504, the indication information of the predicted motion vector fed through line L510, and the differential motion vector fed through line L511. The entropy encoder 514 multiplexes codes generated by the encoding, to generate a compressed stream, and then outputs the compressed stream through line L517. The entropy encoder 514 may use any encoding method, such as arithmetic coding or run-length coding. The entropy encoder 514 can adaptively determine an occurrence probability in arithmetic coding of the indication information of the predicted motion vector fed through line L510, based on the format information of the target block fed through line L504. For example, the entropy encoder 514 may set a high value as an occurrence probability of the indication information indicating a motion vector of a partial region in contact with the processing target subpartition. Figure 26 is a drawing showing a configuration of the motion predictor according to one embodiment. As shown in Figure 26, the motion predictor 506 has a motion vector memory 5061, a motion reference candidate generator 5062, and a predicted motion vector generator 5063. The motion vector memory 5061 stores motion vectors of previously processed partial regions and outputs the previously encoded motion vectors through line L5061, for derivation of the predicted motion vector of the processing target subpartition. The motion reference candidate generator 5062 generates candidates for the predicted motion vector from the motion vectors of the partial regions fed through line L5061, by a method described below, based on the format information fed through line L504. The motion reference candidate generator 5062 outputs the candidates for the predicted motion vector thus generated through line L5062.
The predicted motion vector generator 5063 selects, from the candidates for the predicted motion vector fed through line L5062, a candidate that minimizes the difference from the motion vector of the processing target subpartition. The predicted motion vector generator 5063 outputs the selected candidate as the predicted motion vector through line L509. It also outputs the indication information specifying the selected candidate, through line L510. If the number of candidates generated in the motion reference candidate generator is limited to one, the output of the indication information may be omitted. There are no restrictions on a method of limiting the number of candidates to one; any method can be applied, for example, a method of using a median of three candidates, a method of using an average of two candidates, or a method of defining a priority order for selecting one out of a plurality of candidates. The operation of the video coding device 10 is described below, and a video coding method according to one embodiment is also described. Figure 27 is a flowchart of the video coding method according to one embodiment. In one embodiment, as shown in Figure 27, the block splitter 501 first divides an input image into a plurality of blocks, in step S501. In the next step S502, the subpartition generator 502 divides a target block into a plurality of subpartitions, as described above. The subpartition generator 502 also generates the format information, as described above. In step S503, the motion detector 504 then obtains a motion vector of a processing target subpartition, as described above. In the subsequent step S504, the predicted signal generator 505 generates a predicted signal of the target block, using the motion vectors of the respective subpartitions in the target block and the frame reference image signals, as described above. In step S505, the motion predictor 506 then obtains a predicted motion vector. In addition, the motion predictor 506 generates indication information specifying a candidate selected from a plurality of candidates for the predicted motion vector. Details of the processing of this step S505 will be described later. In the subsequent step S506, the subtractor 507 calculates the difference between the motion vector of each subpartition and the predicted motion vector, to generate a differential motion vector, as described above. In step S507, the residual signal generator 508 then obtains the difference between the image signal of the target block and the predicted signal, to generate a residual signal. In the subsequent step S508, the transformer 509 performs an orthogonal transformation of the residual signal, to generate transformation coefficients. In the subsequent step S509, the quantizer 510 quantizes the transformation coefficients, to generate quantized transformation coefficients. In the subsequent step S510, the inverse quantizer 511 performs an inverse quantization of the quantized transformation coefficients, to generate inversely quantized transformation coefficients. In the subsequent step S511, the inverse transformer 512 performs an inverse transformation of the inversely quantized transformation coefficients, to generate a reconstructed residual signal. In step S512, the adder 513 then adds the predicted signal of the target block to the reconstructed residual signal, to generate a reconstructed image signal. In the subsequent step S513, the reconstructed image signal is stored as a previously reconstructed image signal in the frame memory 503.
In step S514, the entropy encoder 514 then encodes the quantized transformation coefficients, the format information of the target block, the indication information of the predicted motion vector, and the differential motion vector. In the next step S515, it is determined whether all blocks have been processed. If the processing is not completed for all blocks, the processing from step S502 is continued for an unprocessed block as a target. On the other hand, if the processing is completed for all blocks, the processing is terminated. The operation of the motion predictor 506 will be described below in more detail. Figure 28 is a flowchart showing the processing of the motion predictor according to one embodiment. The motion predictor 506 outputs the predicted motion vector (hereinafter referred to as PMV) and the indication information specifying the PMV, according to the flowchart shown in Figure 28. In the processing of the motion predictor 506, as shown in Figure 28, the value of counter i is first set to 0 in step S505-1. It is assumed below that the processing for the first subpartition is carried out with i = 0 and the processing for the second subpartition is carried out with i = 1. The next step S505-2 is to generate candidates for the PMV of a processing target subpartition from the motion vectors of previously processed partial regions, according to a method described below. The number of candidates for the PMV is two in this example. That is, the candidates for the PMV can be set as follows: a motion vector of a previously processed partial region located to the left of the processing target subpartition and a motion vector of a previously processed partial region located above the processing target subpartition are set as candidates for the predicted motion vector of the processing target subpartition. In step S505-2, the number of candidates generated is set in NCand. Next, in step S505-3, it is determined whether NCand is "0". When NCand is "0" (Yes), the processing proceeds to step S505-4. When NCand is not "0" (No), the processing proceeds to step S505-5. In step S505-4, the PMV is set to a zero vector, and the processing proceeds to step S505-10. On this occasion, the PMV may be set to a motion vector of a predetermined block, a motion vector of a partial region processed immediately before the processing target subpartition, or the like, instead of the zero vector. In step S505-5, it is determined whether NCand is "1". When NCand is "1" (Yes), the processing proceeds to step S505-10. When NCand is not "1" (No), the processing proceeds to step S505-6. In step S505-6, a PMV is selected from the candidates for the PMV generated in step S505-2. The PMV to be selected may be a candidate that minimizes the difference from the motion vector of the processing target subpartition. Next, step S505-7 is to determine whether the PMV selected in step S505-6 is the left candidate, that is, the motion vector of the left partial region. When the PMV selected in step S505-6 is the left candidate (Yes), the processing proceeds to step S505-8. When the PMV selected in step S505-6 is not the left candidate (No), the processing proceeds to step S505-9. In step S505-8, the indication information pmv_left_flag = 1, indicating that the PMV is the motion vector of the partial region located to the left of the processing target subpartition, is output. On the other hand, in step S505-9, the indication information pmv_left_flag = 0, indicating that the PMV is the motion vector of the partial region located above the processing target subpartition, is output. Then, in step S505-10, the PMV remaining as a candidate is output. In the subsequent step S505-11, "1" is added to the value of counter i. Next, in step S505-12, it is determined whether the value of counter i is less than "2".
When the value of counter i is less than "2" (Yes), the processing proceeds to step S505-2. When the value of counter i is not less than "2" (No), the processing is terminated. If step S505-2 is configured to limit the number of candidates generated to one, steps S505-5, S505-6, S505-7, S505-8, and S505-9 can be omitted. There are no restrictions on this limiting method; it is possible to adopt, for example, a method of using a median of three candidates, a method of using an average of two candidates, or a method of determining a priority order for selecting one out of a plurality of candidates, as described above in the description of the predicted motion vector generator 5063. In the configuration where the number of candidates generated in step S505-2 is limited to one, when NCand is not "0" in step S505-3 (No), the processing proceeds to step S505-10. The method of generating candidates for the predicted motion vector of the processing target subpartition in step S505-2 will be described below in more detail. Figure 29 is a drawing showing an example of subpartitions of a target block and surrounding partial regions. As shown in Figure 29, the motion reference candidate generator 5062 refers to the partial region U1 and the partial region L1 for the first subpartition SP1 and, when each of the partial regions has been processed by inter-frame prediction, the motion reference candidate generator 5062 employs the motion vector of the partial region as a candidate for the predicted motion vector of the first subpartition SP1. Similarly, the motion reference candidate generator 5062 refers to the partial region U2 or the partial region L2 for the second subpartition, to generate candidates for the predicted motion vector of the second subpartition. The partial regions U1, L1, U2, and L2 herein are blocks or subpartitions around the target block P and are regions serving as units of generation of the predicted signal. The partial regions may be blocks prepared for generation of candidates for the predicted motion vector (for example, blocks generated by division in a single format), independent of the units of generation of the predicted signal. The partial region U1 is a partial region including a pixel Pi1(0, -1) neighboring above the uppermost and leftmost pixel F(0, 0) of the first subpartition SP1, and is a previously processed partial region in contact with the subpartition SP1. The partial region L1 is a partial region including a pixel Pi2(-1, 0) neighboring to the left of the uppermost and leftmost pixel F(0, 0) of the first subpartition SP1, and is a partial region in contact with the first subpartition SP1. The partial region U2 is a partial region neighboring to the right of a partial region including a pixel Pi3(x1, -1), and is a partial region in contact with the x axis. The partial region L2 is a partial region neighboring below a partial region including a pixel Pi4(-1, y1), and is a partial region in contact with the y axis. The x coordinate x1 of the pixel Pi3 and the y coordinate y1 of the pixel Pi4 can be calculated by formula (3) and formula (4). x1 = ceiling(-k/m) (3) y1 = ceiling(k) (4) Formulas (3) and (4) are obtained by applying the function ceiling(z) to the values resulting from the substitution of y = 0 and x = 0, respectively, into the linear expression (1) expressing the extension line Ln of the boundary between the first subpartition SP1 and the second subpartition SP2.
Here ceiling(z) is called a ceiling function, which is a function that derives a minimum integer not less than the real number z. A floor function may be employed instead of the ceiling function. Here floor(z) is called a floor function, which is a function that derives a maximum integer not greater than the real number z. In addition, x1 and y1 may be calculated by formulas (5) and (6). x1 = ceiling((-1-k)/m) (5) y1 = ceiling(-m+k) (6) Formulas (5) and (6) are obtained by applying the function ceiling(z) to the values resulting from the substitution of y = -1 and x = -1, respectively, into formula (1). Whether the partial regions U2 and L2 exist is determined as described below. The conditions for the existence of the partial region U2 are that it is within the image and that formula (7) is satisfied. The conditions for the existence of the partial region L2 are that it is within the image and that formula (8) is satisfied. 0 < x1 (7) 0 < y1 (8) When the condition of formula (7) is not satisfied, the partial region L2 exists between the second subpartition SP2 and the partial region U2. In this case, the partial region U2, which is more distant from the second subpartition SP2, is less likely than the partial region L2, which is closer to the second subpartition SP2, to have a motion vector close to that of the second subpartition SP2. In this case, the motion vector of the partial region U2 can be excluded from the candidates for the predicted motion vector, by the condition of formula (7). Likewise, when the condition of formula (8) is not satisfied, the partial region U2 exists between the second subpartition SP2 and the partial region L2. In this case, the partial region L2, which is more distant from the second subpartition SP2, is less likely than the partial region U2, which is closer to the second subpartition SP2, to have a motion vector close to that of the second subpartition SP2. In this case, the motion vector of the partial region L2 can be excluded from the candidates for the predicted motion vector, by the condition of formula (8). In one example, the conditions defined by formulas (9) and (10) below may be used instead of the conditions of formulas (7) and (8). 0 < x1 < blocksizeX (9) 0 < y1 < blocksizeY (10) Here, blocksizeX and blocksizeY are the number of horizontal pixels and the number of vertical pixels in the target block P. For example, when the target block P is a block of 8 x 8 pixels, blocksizeX = 8 and blocksizeY = 8. By using the condition of formula (9) or formula (10), it is possible to exclude, from the candidates for the predicted motion vector, a motion vector of a partial region having no contact with the second subpartition SP2, out of the partial region U2 and the partial region L2. This allows only candidates for the predicted motion vector with conceivably high prediction accuracy to be left. When the partial regions U1, L1, U2, and L2 are set as described above, the candidates for the predicted motion vector of each subpartition are generated from motion vectors of previously processed partial regions located on the same side with respect to the extension line of the boundary between the subpartitions. Provided that the candidates for the predicted motion vector of the subpartition SP2 are generated from motion vectors of partial regions in the same domain as the subpartition SP2 with respect to the extension line Ln of the boundary between the subpartition SP2 and the other subpartition of the target block including the subpartition SP2, the method of generating the predicted motion vector is not limited to that in the embodiment described above.
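To make formulas (3), (4), (9), and (10) concrete, the following sketch computes x1 and y1 and evaluates the existence conditions of the partial regions U2 and L2. The code, its names, and the assumption of a non-horizontal boundary (m ≠ 0) are illustrative only.

```python
import math

# A minimal sketch of formulas (3)-(4) and conditions (9)-(10),
# assuming the boundary extension line Ln is y = m*x + k with m != 0.
def u2_l2_existence(m, k, blocksize_x, blocksize_y):
    x1 = math.ceil(-k / m)  # formula (3): Ln crosses y = 0 at x = -k/m
    y1 = math.ceil(k)       # formula (4): Ln crosses x = 0 at y = k
    u2_exists = 0 < x1 < blocksize_x  # condition (9)
    l2_exists = 0 < y1 < blocksize_y  # condition (10)
    return x1, y1, u2_exists, l2_exists

# Example for an 8 x 8 block: the line y = 1.5*x - 2 crosses y = 0 at
# x = 4/3, so x1 = 2 and U2 exists, while y1 = -2 and L2 does not.
x1, y1, u2, l2 = u2_l2_existence(1.5, -2.0, 8, 8)
```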
For example, the partial region U2 may be a partial region including the pixel Pi3, and the partial region L2 may be a partial region including the pixel Pi4. A condition that the entire partial region be present in the same domain as the subpartition SP2 with respect to the line Ln may be added as a condition for the motion vector of the partial region to be added to the candidates for the predicted motion vector of the subpartition SP2. In this case, it is possible to employ, for example, a method of inspecting the positions of all the corners of the partial region. Even if a partial region is not completely included in the same domain as a subpartition with respect to the extension line, the motion vector of the partial region may be employed as a candidate for the predicted motion vector of the subpartition. Figure 30 is a drawing showing another example of subpartitions of a target block and surrounding partial regions. As shown by way of example in Figure 30, the motion vectors of the partial regions RA, RB, RG, and RE may be used as candidates for the predicted motion vector of the first subpartition SP1. The motion vector of the partial region RE may be added to the candidates for the predicted motion vector of the second subpartition SP2. In the description with Figures 28 and 29, the number of motion vectors serving as candidates for the predicted motion vector was two at most; however, it is also possible to select two out of the motion vectors obtained by any of the conditions described above. For example, the motion vector of the partial region U2 shown in Figure 29 and a motion vector of a partial region neighboring the partial region U2 may be selected as candidates for the predicted motion vector. Likewise, the motion vector of the partial region L2 and a motion vector of a partial region neighboring the partial region L2 may be selected as candidates for the predicted motion vector. Furthermore, three or more motion vectors may be selected as candidates for the predicted motion vector from the motion vectors specified by any of the conditions described above. Furthermore, an average or a median of a plurality of candidates for the predicted motion vector may be added to the candidates for the predicted motion vector. The format information of the block can be used as a method of limiting the number of candidates for the predicted motion vector generated in step S505-2 in Figure 28 to a maximum of one. For example, out of the previously encoded partial regions in contact with the processing target subpartition, a motion vector of a partial region with a maximum length of the portion in contact with the subpartition may be added as a candidate for the predicted motion vector. It is also possible to employ a motion vector of a previously encoded partial region with a minimum distance from the processing target subpartition, as a candidate for the predicted motion vector of the subpartition. The methods of generating candidates for the predicted motion vector described above can be applied to subpartitions of any format. Figure 31 is a drawing showing further examples of subpartitions of a target block and surrounding partial regions. (A) of Figure 31 shows subpartitions defined by a line Ln with a y-intercept and a slope different from those of the line Ln shown in Figure 29. (B) of Figure 31 shows subpartitions defined by a line Ln with a slope approximately symmetric to that of the line Ln shown in Figure 29 with respect to the y axis, and with a y-intercept different from that of the line Ln shown in Figure 29. (C) of Figure 31 shows subpartitions defined by two lines Ln1 and Ln2.
(D) of Figure 31 shows subpartitions defined by two intersecting lines Ln1 and Ln2. When a boundary extension line as shown in (A) to (D) of Figure 31 is used as a reference, the partial regions L2 and U2 with motion vectors that can be candidates for the predicted motion vector of the subpartition SP2 can be specified by the above-mentioned methods of generating candidates for the predicted motion vector. It is noted that the subpartitions are not limited to those divided only by a straight line. For example, in the case where the formats of subpartitions are selected out of predetermined patterns, a motion vector of a previously encoded partial region belonging to the same domain as a processing target subpartition with respect to an extension line of a boundary between subpartitions can be used as a candidate for the predicted motion vector. If patterns of subpartition formats are defined in advance, it is also possible to determine in advance, for each format pattern, a partial region with a motion vector to be adopted as a candidate for the predicted motion vector. The patterns may include patterns of dividing a target block into rectangular subpartitions. The above-mentioned method of selecting the predicted motion vector can also be applied as a method of selecting a motion vector in generating the predicted signal of a processing target subpartition using motion vectors of previously encoded partial regions. That is, the predicted signal of the processing target subpartition may be generated using the predicted motion vector selected in step S505-2 in Figure 28. In this case, there is no need for encoding of the differential motion vector, and therefore the predicted motion vector output from the motion predictor 506 is fed not to the subtractor 507 but to the predicted signal generator 505. Furthermore, the video coding device 10 may be configured to determine whether the differential motion vector is to be encoded, and to encode application information specifying the result of the determination. In this modification, the motion predictor 506 may include a function to switch the output destination of the predicted motion vector between the subtractor 507 and the predicted signal generator 505, based on the application information. In this modification, it is unfavorable that the motion vectors of all the subpartitions in a target block become identical to each other, because the division of the target block becomes insignificant. That is, on the occasion of generation of the candidates for the motion vector of a processing target subpartition in step S505-2 in Figure 28, a motion vector of a previously encoded subpartition in the target block may be excluded from the candidates. For example, in the case where the target block is divided into two subpartitions and where the motion vector of the first subpartition is encoded first, the motion vector of the first subpartition is excluded from the candidates for the predicted motion vector of the second subpartition. If the motion vector of the first subpartition is equal to that of the partial region U2, the motion vector of the partial region U2 does not need to be used in generation of the predicted motion vector of the second subpartition. The occurrence probability in the arithmetic coding of the above-mentioned application information, which indicates whether the differential motion vector is to be encoded, can be determined adaptively according to the format information of the subpartition.
For example, the occurrence probability of the application information indicating that the differential motion vector of the first subpartition is not encoded may be set higher than that of the application information indicating that the differential motion vector of the second subpartition is not encoded. The reason for this is as follows: the second subpartition may have no contact with any previously encoded partial region, whereas the first subpartition always comes into contact with a previously encoded partial region; therefore, setting the occurrence probabilities as described above can reduce the code amount of the application information. The effect of one embodiment will be described with reference to Figure 32, which shows an example of division of a target block into rectangular subpartitions, for simplicity. In this example, the target block P is divided into a left subpartition SP1 and a right subpartition SP2 by a straight line Ln. In this example, the motion vector of the first subpartition SP1 and the motion vector of a partial region RB are candidates for the predicted motion vector of the second subpartition SP2. In the example shown in Figure 32, if the predicted signal of the second subpartition SP2 is generated using the motion vector of the first subpartition SP1, the predicted signal of the first subpartition SP1 and the predicted signal of the second subpartition SP2 will be generated using the same motion vector, which makes the division of the target block into two subpartitions insignificant. For this reason, the predicted signal of the second subpartition SP2 may be generated using the motion vector of the partial region RB above the subpartition SP2. In the example shown in Figure 32, therefore, it is determined in advance between the encoding device and the decoding device that the predicted signal of the second subpartition SP2 is to be generated using the motion vector of the partial region RB, which reduces the candidates for the predicted motion vector and eliminates the need to transmit the indication information indicating one predicted motion vector out of a plurality of candidates for the predicted motion vector. Furthermore, a case is considered in which the video coding device 10 determines whether the differential motion vector needs to be encoded (in which the motion predictor 506 switches the output destination of the predicted motion vector between the subtractor 507 and the predicted signal generator 505, based on the application information). In this case, if the motion vector of the partial region RB is equal to that of the first subpartition SP1, selecting either of the two candidates for the predicted motion vector results in the predicted motion vector of the second subpartition SP2 being equal to the motion vector of the first subpartition SP1. Therefore, it is determined in advance between the encoding device and the decoding device that, if the two candidates for the predicted motion vector are identical to each other, the predicted signal of the second subpartition SP2 is to be generated from the motion vector resulting from addition of the differential motion vector and the predicted motion vector, which eliminates the need to transmit the application information indicating whether the differential motion vector is to be encoded, in addition to the indication information.
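The rule just described, that signaling can be skipped when the candidate set collapses to a single value, can be sketched as follows; the list representation and the function name are illustrative assumptions, on the understanding that encoder and decoder apply the same rule.

```python
# A minimal sketch: indication information is only needed when more
# than one distinct predicted motion vector candidate remains.
def needs_indication_information(candidates):
    return len(set(candidates)) > 1

# Example: with candidates [(2, 1), (2, 1)] (the SP1 motion vector and
# an equal RB motion vector), no indication information is transmitted,
# since either choice yields the same predicted motion vector.
```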
In cases where a target block is divided into three or more subpartitions, as shown in Figure 33, the division of the target block is significant if the first subpartition SP1, the second subpartition SP2, and the third subpartition SP3 have the same motion vector and only the fourth subpartition SP4 has a different motion vector. In such cases, therefore, the predicted signal of the second subpartition SP2 and the predicted signal of the third subpartition SP3 may be generated using the motion vector of the first subpartition SP1, instead of the motion vectors of the partial region RB and the partial region RE, respectively. However, for the fourth subpartition SP4, if the motion vectors of the second subpartition SP2 and the third subpartition SP3 are equal, the two candidates for the predicted motion vector become identical to each other; therefore, by determining a rule in advance between the encoding device and the decoding device, it becomes unnecessary to transmit the indication information indicating one predicted motion vector. Furthermore, if the first subpartition SP1, the second subpartition SP2, and the third subpartition SP3 have the same motion vector, and if the predicted signal of the fourth subpartition SP4 is generated using the motion vector of the second subpartition SP2 or the third subpartition SP3, the four subpartitions will all have the same motion vector; therefore, by determining a rule in advance between the encoding device and the decoding device, it also becomes unnecessary to transmit the application information indicating whether the differential motion vector is to be encoded, in addition to the indication information. A video decoding device according to one embodiment will be described below. Figure 34 is a drawing schematically showing a configuration of the video decoding device according to one embodiment. The video decoding device 20 shown in Figure 34 is a device that can generate a video sequence by decoding a compressed stream generated by the video coding device 10. As shown in Figure 34, the video decoding device 20 is provided with data decoder 601, motion predictor 602, adder 603, inverse quantizer 604, inverse transformer 605, frame memory 606, predicted signal generator 607, and adder 608. The data decoder 601 analyzes a compressed stream fed through line L601. The data decoder 601 sequentially performs the processing described below, for each block serving as a decoding target (hereinafter referred to as a target block). The data decoder 601 decodes the encoded data associated with the target block in the compressed stream, to restore the quantized transformation coefficients of the target block, and outputs the quantized transformation coefficients through line L602. The data decoder 601 also decodes the encoded data, to restore the format information of the target block, and outputs the format information through line L603. On this occasion, the split applicability information indicating whether the target block is to be divided is restored, and if the split applicability information indicates no division of the target block, the format information does not need to be restored. The data decoder 601 also decodes the encoded data, to restore the indication information for each subpartition in the target block, i.e., the information indicating one out of a plurality of candidates for the predicted motion vector, and outputs the indication information through line L604.
The data decoder 601 also decodes the encoded data, to restore the differential motion vector of the target block, and outputs the differential motion vector through line L605. Furthermore, the data decoder 601 can adaptively determine the occurrence probability in decoding of the encoded data, on the occasion of restoring the indication information of the predicted motion vector, based on the format information of the target block. A method of implementing this can be, for example, to set a higher occurrence probability for the indication information indicating, as the predicted motion vector, a motion vector of a partial region in contact with the processing target subpartition. The motion predictor 602 generates a predicted motion vector of a processing target subpartition, based on the format information fed through line L603, the motion vectors of partial regions prior in the processing order fed through line L606, and the indication information fed through line L604, and outputs the predicted motion vector through line L607. By limiting the candidates for the predicted motion vector to one by a predetermined method, it is also possible to omit the input of the indication information. The adder 603 adds the predicted motion vector fed through line L607 to the differential motion vector fed through line L605, to generate a motion vector of a target block or a motion vector of a subpartition in the target block, and outputs the motion vector through line L606. The inverse quantizer 604 performs an inverse quantization of the quantized transformation coefficients fed through line L602, to generate inversely quantized transformation coefficients. The inverse quantizer 604 outputs the inversely quantized transformation coefficients thus generated through line L608. The inverse transformer 605 performs an inverse orthogonal transformation of the inversely quantized transformation coefficients fed through line L608, to generate a reconstructed residual signal. The inverse transformer 605 outputs the reconstructed residual signal thus generated through line L609. If the reconstructed residual signal to be generated has not been quantized, the video decoding device 20 does not need to be provided with the inverse quantizer 604. Likewise, if the reconstructed residual signal to be generated has not been transformed, the video decoding device 20 does not need to be provided with the inverse transformer 605. The frame memory 606 stores previously reconstructed image signals fed through line L610, i.e., frame image signals prior in the processing order to the processing target input image (which will be referred to hereinafter as frame reference image signals). Furthermore, the frame memory 606 outputs the frame reference image signals through line L611. The predicted signal generator 607 generates the predicted signal of the image of each subpartition in the target block, from an image signal within a predetermined range of the frame reference image signals fed through line L611, based on the motion vector fed through line L606 and the format information fed through line L603. The predicted signal generator 607 outputs the predicted signal thus generated through line L612. Although the description is omitted in this specification, the predicted signal may be generated by intra-frame prediction in addition to inter-frame prediction. The adder 608 adds the reconstructed residual signal fed through line L609 to the predicted signal of the target block fed through line L612, to generate a reconstructed image signal. The adder 608 outputs the reconstructed image signal through line L610.
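The two additions performed on the decoder side, by the adder 603 and the adder 608, mirror the encoder-side subtractions, as the sketch below makes explicit; the tuple and list representations are illustrative assumptions only.

```python
# A minimal sketch of the decoder-side addition relations described above.
def restore_motion_vector(pmv, dmv):
    # Adder 603: motion vector = predicted MV + differential MV.
    return (pmv[0] + dmv[0], pmv[1] + dmv[1])

def restore_image(predicted, residual):
    # Adder 608: reconstructed signal = predicted + reconstructed residual.
    return [p + r for p, r in zip(predicted, residual)]
```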
Figure 35 is a drawing showing a configuration of the motion predictor according to one embodiment. As shown in Figure 35, the motion predictor 602 has a motion vector memory 6021, a motion reference candidate generator 6022, and a predicted motion vector generator 6023. The motion vector memory 6021 stores motion vectors fed through line L606. The motion vectors stored in the motion vector memory 6021 are motion vectors of processed partial regions prior in the processing order to the target block or the processing target subpartition. The motion vector memory 6021 outputs the stored motion vectors through line L6021, for derivation of the predicted motion vector of the processing target subpartition. The motion reference candidate generator 6022 generates candidates for the predicted motion vector from the motion vectors fed through line L6021, by a method described below, based on the format information fed through line L603, and outputs them through line L6022. The predicted motion vector generator 6023 determines a predicted motion vector, based on the indication information of the predicted motion vector fed through line L604, out of the candidates for the predicted motion vector fed through line L6022, and outputs the determined predicted motion vector through line L607. If the number of candidates to be generated in the motion reference candidate generator is limited to one, the input of the indication information specifying the candidate to be selected may be omitted. The operation of the video decoding device 20 and a video decoding method according to one embodiment are described below. Figure 36 is a flowchart of a video decoding method according to one embodiment. In one embodiment, as shown in Figure 36, in step S621, the data decoder 601 first decodes the encoded data associated with a target block in the compressed data, as described above, to restore the quantized transformation coefficients, the format information, and the differential motion vector of the target block. In step S621, the split applicability information and the indication information may also be restored. Furthermore, in step S621, the inverse quantizer 604 may generate inversely quantized transformation coefficients from the restored quantized transformation coefficients, and the inverse transformer 605 may generate a reconstructed residual signal from the inversely quantized transformation coefficients. In step S622, the motion predictor 602 then determines the predicted motion vector of the processing target subpartition, for each subpartition in the target block serving as a processing target. In the subsequent step S623, the adder 603 adds the predicted motion vector of the processing target subpartition to the differential motion vector, to generate a motion vector. In step S624, the predicted signal generator 607 then generates the predicted signal from the frame reference image signals in the frame memory 606, using the motion vectors of the target block. In the subsequent step S625, the adder 608 adds the predicted signal of the target block to the reconstructed residual signal, to generate a reconstructed image signal. In step S626, the reconstructed image signal generated in step S625 is then stored as a previously reconstructed image signal in the frame memory 606. In the subsequent step S627, it is determined whether the processing is completed for all blocks. If the processing is not completed for all blocks, the processing from step S621 is continued using an unprocessed block as a target block. On the other hand, when the processing is completed for all blocks, the processing is terminated. The operation of the motion predictor 602 will be described below in detail.
Figure 37 is a flowchart showing the processing of the motion predictor according to one embodiment. The motion predictor 602 generates the predicted motion vector according to the flowchart shown in Figure 37. In one embodiment, in step S615-1, the value of counter i is set to "0". It is assumed below that the processing for the first subpartition is carried out with i = 0 and the processing for the second subpartition is carried out with i = 1. In the next step S615-2, two candidates (a left candidate and an above candidate) that can be the predicted motion vector of the processing target subpartition are determined according to one of the methods described above using Figures 29, 30, 31, 32, and 33, out of the motion vectors of partial regions prior in the processing order to the processing target subpartition. In step S615-3, it is then determined whether the number NCand of candidates generated in step S615-2 is "0". When NCand is "0" (Yes), the processing proceeds to step S615-4. When NCand is not "0" (No), the processing proceeds to step S615-5. In step S615-4, the predicted motion vector PMV is set to a zero vector, and the processing proceeds to step S615-11. On this occasion, it is also possible to set a motion vector of a predetermined block or a motion vector of a partial region immediately prior in the processing order to the processing target subpartition, instead of the zero vector, as the predicted motion vector PMV. In step S615-5, it is determined whether the number NCand of candidates generated in step S615-2 is "1". When NCand is "1" (Yes), the processing proceeds to step S615-6. When NCand is not "1" (No), the processing proceeds to step S615-7. In step S615-6, the candidate generated in step S615-2 is set as the PMV. Then the processing proceeds to step S615-11. In step S615-7, the information pmv_left_flag indicating which PMV is to be selected out of the candidates generated in step S615-2 is acquired. Then the processing proceeds to step S615-8. In step S615-8, it is determined whether the value of pmv_left_flag is "1". When the value of pmv_left_flag is "1" (Yes), the processing proceeds to step S615-9. When the value of pmv_left_flag is not "1" (No), the processing proceeds to step S615-10. In step S615-9, a motion vector of the partial region on the left side of the processing target subpartition is set as the PMV. Then the processing proceeds to step S615-11. In step S615-10, a motion vector of the partial region above the processing target subpartition is set as the PMV. Then the processing proceeds to step S615-11. Step S615-11 is to output the PMV thus set. Then the processing proceeds to step S615-12. Next, step S615-12 is to add "1" to the value of counter i. Then the processing proceeds to step S615-13. Next, step S615-13 is to determine whether the value of counter i is less than "2". When the value of counter i is less than "2" (Yes), the processing proceeds to step S615-2. On the other hand, when the value of counter i is not less than "2" (No), the processing is terminated. By limiting the number of candidates for the predicted motion vector generated in step S615-2 to one, the processes of steps S615-5, S615-6, S615-7, S615-8, S615-9, and S615-10 can be omitted.
There are no restrictions on a method for this limitation, as described above for the predicted motion vector generator 6023; it is possible to use, for example, a method of using a median of three candidates, a method of using an average of two candidates, or a method of determining in advance a priority order for selecting one predicted motion vector out of a plurality of candidates for the predicted motion vector. In that case, when NCand is not "0" (No) in step S615-3, the processing proceeds to step S615-6. The method described above can be applied as a method of selecting a motion vector in the case where the predicted signal of the processing target subpartition is generated using previously decoded motion vectors. That is, the predicted signal of the processing target subpartition may be generated using the predicted motion vector selected in step S615-2 in Figure 37. In this case, there is no need for decoding of the differential motion vector, and therefore the predicted motion vector output from the motion predictor 602 is fed not to the adder 603 but to the predicted signal generator 607. Furthermore, the data decoder 601 may be configured to decode application information specifying whether the differential motion vector is to be decoded. In this modification, the motion predictor 602 may include a function to switch the output destination of the predicted motion vector between the adder 603 and the predicted signal generator 607, based on the application information. In this modification, it is unfavorable that the motion vectors of all the subpartitions in the target block become identical to each other, because the division of the target block becomes insignificant. In this modification, therefore, a motion vector of a subpartition included in the target block and prior in the processing order to the processing target subpartition may be excluded from the candidates for the predicted motion vector, on the occasion of generation of the candidates for the predicted motion vector of the processing target subpartition in step S615-2 in Figure 37. For example, in the case where the target block is divided into two subpartitions and where the motion vector of the first subpartition is restored first, the motion vector of the first subpartition is excluded from the candidates for the predicted motion vector of the second subpartition. If the motion vector of the first subpartition is equal to that of the partial region U2, the motion vector of the partial region U2 does not need to be used in generation of the predicted motion vector of the second subpartition. In this modification, the occurrence probability in the arithmetic decoding of the application information indicating whether the differential motion vector is to be decoded can be determined adaptively according to the format information. This method can be configured, for example, to set a higher probability of not decoding the differential motion vector for the first subpartition, which always comes into contact with a previously decoded partial region, than for the second subpartition, which possibly has no contact with any previously decoded partial region. Since the effect of this modification has already been described using Figures 32 and 33, the description thereof is omitted herein.
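As a concrete illustration of the selection flow of Figure 37, the sketch below condenses steps S615-3 through S615-10 for one subpartition. The argument names and the use of None for an absent candidate are illustrative assumptions; candidate generation itself (step S615-2) is elided.

```python
# A minimal sketch of PMV selection for one processing target subpartition.
ZERO_MV = (0, 0)

def select_pmv(left_mv, above_mv, pmv_left_flag):
    candidates = [v for v in (left_mv, above_mv) if v is not None]
    if len(candidates) == 0:   # NCand == 0: steps S615-3 and S615-4
        return ZERO_MV
    if len(candidates) == 1:   # NCand == 1: steps S615-5 and S615-6
        return candidates[0]
    # NCand == 2: the decoded indication information chooses the left
    # candidate (pmv_left_flag == 1) or the above candidate
    # (pmv_left_flag == 0), as in steps S615-7 to S615-10.
    return left_mv if pmv_left_flag == 1 else above_mv
```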
A video coding program enabling a computer to operate as the video coding device 10 and a video decoding program enabling a computer to operate as the video decoding device 20 will be described below. Figure 38 is a drawing showing a configuration of the video coding program according to one embodiment. Figure 39 is a drawing showing a configuration of the video decoding program according to one embodiment. In the description below, reference is made to Figure 18 showing the hardware configuration of the computer according to one embodiment and Figure 19 showing the perspective view of the computer according to one embodiment, as well as Figures 38 and 39. The video coding program P10 shown in Figure 38 can be provided stored on the recording medium SM. The video decoding program P20 shown in Figure 39 can also be provided stored on the recording medium SM. Examples of the recording medium SM include recording media such as floppy disks, CD-ROMs, DVDs, or ROMs, or semiconductor memories, or the like. As described above, the computer C10 may be provided with the reading device C12 such as a floppy disk drive, a CD-ROM drive, or a DVD drive, the working memory (RAM) C14 in which an operating system is resident, the memory C16 storing programs stored on the recording medium SM, the monitor device C18 such as a display, the mouse C20 and the keyboard C22 as input devices, the communication device C24 for transmission and reception of data and others, and the CPU C26 to control execution of programs. When the recording medium SM is put into the reading device C12, the computer C10 becomes able to access the video coding program P10 stored on the recording medium SM through the reading device C12, and becomes able to operate as the video coding device 10, based on the program P10. When the recording medium SM is put into the reading device C12, the computer C10 becomes able to access the video decoding program P20 stored on the recording medium SM through the reading device C12, and becomes able to operate as the video decoding device 20, based on the program P20. As shown in Figure 19, the video coding program P10 and the video decoding program P20 may be those provided as a computer data signal CW superimposed on a carrier wave, over a network. In this case, the computer C10 can execute the program P10 or P20 after the video coding program P10 or the video decoding program P20 received by the communication device C24 is stored in the memory C16. As shown in Figure 38, the video coding program P10 includes block division module M101, subpartition generator module M102, storage module M103, motion detection module M104, predicted signal generation module M105, motion prediction module M106, subtraction module M107, residual signal generation module M108, transform module M109, quantization module M110, inverse quantization module M111, inverse transform module M112, addition module M113, and entropy coding module M114.
In one embodiment, the block division module M101, subpartition generator module M102, storage module M103, motion detection module M104, predicted signal generation module M105, motion prediction module M106, subtraction module M107, residual signal generation module M108, transform module M109, quantization module M110, inverse quantization module M111, inverse transform module M112, addition module M113, and entropy coding module M114 cause the computer C10 to execute the same functions as the block splitter 501, subpartition generator 502, frame memory 503, motion detector 504, predicted signal generator 505, motion predictor 506, subtractor 507, residual signal generator 508, transformer 509, quantizer 510, inverse quantizer 511, inverse transformer 512, adder 513, and entropy encoder 514, respectively, in the video coding device 10. Based on this video coding program P10, the computer C10 becomes able to operate as the video coding device 10. As shown in Figure 39, the video decoding program P20 includes data decoding module M201, motion prediction module M202, addition module M203, inverse quantization module M204, inverse transform module M205, storage module M206, predicted signal generation module M207, and addition module M208. In one embodiment, the data decoding module M201, motion prediction module M202, addition module M203, inverse quantization module M204, inverse transform module M205, storage module M206, predicted signal generation module M207, and addition module M208 cause the computer C10 to execute the same functions as the data decoder 601, motion predictor 602, adder 603, inverse quantizer 604, inverse transformer 605, frame memory 606, predicted signal generator 607, and adder 608, respectively, in the video decoding device 20. Based on this video decoding program P20, the computer C10 becomes able to operate as the video decoding device 20. A variety of embodiments have been described above in detail. However, it is noted that the present invention is not intended to be limited to the above embodiments. The present invention can be modified in many ways without departing from the scope and spirit of the invention.
Numerical Reference List 100 predictive image coding device; 101 input terminal; 102 block divider; 103 predicted signal generator; 104 frame memory; 105 subtractor; 106 transformer; 107 quantizer; 108 inverse quantizer; 109 inverse transformer; 110 adder; 111 quantized transform coefficient encoder; 112 output terminal; 113 prediction block division type selector; 114 motion information estimator; 115 prediction information memory; 116 prediction information encoder; 201 input terminal; 202 data analyzer; 203 inverse quantizer; 204 inverse transformer; 205 adder; 206 output terminal; 207 quantized transform coefficient decoder; 208 prediction information decoder; 10 video coding device; 20 video decoding device; 501 block splitter; 502 subpartition generator; 503 frame memory; 504 motion detector; 505 predicted signal generator; 506 motion predictor; 507 subtractor; 508 residual signal generator; 509 transformer; 510 quantizer; 511 inverse quantizer; 512 inverse transformer; 513 adder; 514 entropy encoder; 601 data decoder; 602 motion predictor; 603 adder; 604 inverse quantizer; 605 inverse transformer; 606 frame memory; 607 predicted signal generator; 608 adder; 5061 motion vector memory; 5062 motion reference candidate generator; 5063 predicted motion vector generator; 6021 motion vector memory; 6022 motion reference candidate generator; 6023 predicted motion vector generator.
Claims (4)

[1] Predictive image coding device comprising: region partition means adapted to partition an input image into a plurality of regions; prediction information estimation means adapted to subpartition a target region serving as a coding target, partitioned by the region partition means, into a first prediction region and a second prediction region, to determine a prediction block partitioning type indicating a number and region shapes of prediction regions suitable for the target region, to predict first motion information and second motion information for acquiring a signal highly correlated with the first prediction region and a signal highly correlated with the second prediction region, respectively, based on previously reconstructed signals, and to obtain prediction information including: the prediction block partitioning type, the first motion information, the second motion information, first merge identification information indicating whether or not to use decoded motion information associated with neighboring regions near the first prediction region to generate a predicted signal of the first prediction region, and second merge identification information indicating whether or not to use decoded motion information, except the motion information of the first prediction region and except motion information matching the motion information of the first prediction region, out of the decoded motion information associated with neighboring regions near the second prediction region, to generate a predicted signal of the second prediction region; prediction information coding means adapted to encode the prediction information associated with the target region; predicted signal generation means adapted to generate the predicted signal of each of the first and second prediction regions based on the first motion information and the second motion information; residual signal generation means adapted to generate a residual signal based on the pixel signal and the predicted signal of each of the first and second prediction regions; residual signal coding means adapted to encode the residual signal generated by the residual signal generation means; residual signal restoration means adapted to decode the encoded residual signal to generate a reconstructed residual signal; and recording means adapted to generate a restored pixel signal of the target region based on the predicted signal and the reconstructed residual signal, and to store the restored pixel signal as a previously reconstructed signal.

[2] Predictive image coding method performed by a predictive image coding device, comprising: a region partitioning step for partitioning an input image into a plurality of regions; a prediction information estimation step for subpartitioning a target region serving as a coding target, partitioned in the region partitioning step, into a first prediction region and a second prediction region, determining a prediction block partitioning type indicating a number and region shapes of prediction regions suitable for the target region, predicting first motion information and second motion information for acquiring a signal highly correlated with the first prediction region and a signal highly correlated with the second prediction region, respectively, based on previously reconstructed signals, and obtaining prediction information including: the prediction block partitioning type, the first motion information, the second motion information, first merge identification information indicating whether or not to use decoded motion information associated with neighboring regions near the first prediction region to generate a predicted signal of the first prediction region, and second merge identification information indicating whether or not to use decoded motion information, except the motion information of the first prediction region and except motion information matching the motion information of the first prediction region, out of the decoded motion information associated with neighboring regions near the second prediction region, to generate a predicted signal of the second prediction region; a prediction information coding step for encoding the prediction information associated with the target region; a predicted signal generation step for generating the predicted signal of each of the first and second prediction regions based on the first motion information and the second motion information; a residual signal generation step for generating a residual signal based on the pixel signal and the predicted signal of each of the first and second prediction regions; a residual signal coding step for encoding the residual signal generated in the residual signal generation step; a residual signal restoration step for decoding the encoded residual signal to generate a reconstructed residual signal; and a recording step for generating a restored pixel signal of the target region based on the predicted signal and the reconstructed residual signal, and storing the restored pixel signal as a previously reconstructed signal.
[3] Predictive image decoding device comprising: data analysis means adapted to extract, from compressed image data obtained by partitioning an image into a plurality of regions and encoding the image data of the regions, encoded data of prediction information indicating a prediction method to be used in predicting a signal of a target region serving as a decoding target, and encoded data of a residual signal; prediction information decoding means adapted to restore motion information based on the encoded data of the prediction information, wherein the prediction information decoding means restores a prediction block partitioning type indicating a number of prediction regions obtained by subpartitioning the target region, based on the encoded data of the prediction information, wherein, when the prediction block partitioning type indicates that the target region includes a first prediction region and a second prediction region, the prediction information decoding means further decodes the encoded data of the prediction information to restore first merge identification information indicating whether or not to use decoded motion information associated with neighboring regions near the first prediction region to generate a predicted signal of the first prediction region,
wherein, when the first merge identification information indicates not to use the decoded motion information, the prediction information decoding means further decodes the encoded data of the prediction information to restore first motion information used to generate the predicted signal of the first prediction region, wherein, when the first merge identification information indicates to use the decoded motion information, the prediction information decoding means further decodes the encoded data of the prediction information to restore first selection information identifying the first motion information used to generate the predicted signal of the first prediction region out of the decoded motion information associated with neighboring regions near the first prediction region, and restores the first motion information based on the first selection information, wherein the prediction information decoding means further decodes the encoded data of the prediction information to restore second merge identification information indicating whether or not to use decoded motion information associated with neighboring regions near the second prediction region to generate a predicted signal of the second prediction region, wherein, when the second merge identification information indicates not to use the decoded motion information, the prediction information decoding means further decodes the encoded data of the prediction information to restore second motion information used to generate the predicted signal of the second prediction region, and wherein, when the second merge identification information indicates to use the decoded motion information, the prediction information decoding means further decodes the encoded data of the prediction information to restore second selection information identifying the second motion information used to generate the predicted signal of the second prediction region out of the decoded motion information, except the motion information of the first prediction region and except motion information matching the motion information of the first prediction region, among the decoded motion information associated with neighboring regions near the second prediction region, and restores the second motion information based on the second selection information; storage means adapted to store motion information included in the restored prediction information; predicted signal generation means adapted to generate the predicted signal of each of the first and second prediction regions in the target region based on the restored first motion information and the restored second motion information; residual signal restoration means adapted to restore a reconstructed residual signal of the target region based on the encoded data of the residual signal; and recording means adapted to generate a restored pixel signal of the target region based on the predicted signal and the reconstructed residual signal, and to store the restored pixel signal as a previously reconstructed signal.

[4] Predictive image decoding method performed by a predictive image decoding device, characterized in that it comprises: a data analysis step for extracting, from compressed image data obtained by partitioning an image into a plurality of regions and encoding the image data of the regions, encoded data of prediction information indicating a prediction method to be used in predicting a signal of a target region serving as a decoding target, and encoded data of a residual signal; a prediction information decoding step for restoring motion information based on the encoded data of the prediction information, wherein the predictive image decoding device restores a prediction block partitioning type indicating a number of prediction regions obtained by subpartitioning the target region, based on the encoded data of the prediction information, wherein, when the prediction block partitioning type indicates that the target region includes a first prediction region and a second prediction region, the predictive image decoding device further decodes the encoded data of the prediction information to restore first merge identification information indicating whether or not to use decoded motion information associated with neighboring regions near the first prediction region to generate a predicted signal of the first prediction region, wherein, when the first merge identification information indicates not to use the decoded motion information, the predictive image decoding device further decodes the encoded data of the prediction information to restore first motion information used to generate the predicted signal of the first prediction region, wherein, when the first merge identification information indicates to use the decoded motion information, the predictive image decoding device further decodes the encoded data of the prediction information to restore first selection information identifying the first motion information used to generate the predicted signal of the first prediction region out of the decoded motion information associated with neighboring regions near the first prediction region, and restores the first motion information based on the first selection information, wherein the predictive image decoding device further decodes the encoded data of the prediction information to restore second merge identification information indicating whether or not to use decoded motion information associated with neighboring regions near the second prediction region to generate a predicted signal of the second prediction region, wherein, when the second merge identification information indicates not to use the decoded motion information, the predictive image decoding device further decodes the encoded data of the prediction information to restore second motion information used to generate the predicted signal of the second prediction region, and wherein, when the second merge identification information indicates to use the decoded motion information, the predictive image decoding device further decodes the encoded data of the prediction information to restore second selection information identifying the second motion information used to generate the predicted signal of the second prediction region out of the decoded motion information, except the motion information of the first prediction region and except motion information matching the motion information of the first prediction region, among the decoded motion information associated with neighboring regions near the second prediction region, and restores the second motion information based on the second selection information; a storage step for storing motion information included in the restored prediction information; a predicted signal generation step for generating the predicted signal of each of the first and second prediction regions in the target region based on the restored first motion information and the restored second motion information; a residual signal restoration step for restoring a reconstructed residual signal of the target region based on the encoded data of the residual signal; and a recording step for generating a restored pixel signal of the target region based on the predicted signal and the reconstructed residual signal, and storing the restored pixel signal as a previously reconstructed signal.
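To make the decoding rule recited in claims 3 and 4 easier to follow, here is a minimal sketch of the merge-candidate handling for the second prediction region: candidates are drawn from the decoded motion information of neighboring regions, excluding the first prediction region's motion information and any candidate coinciding with it; the merge identification information then selects between a signalled candidate and explicitly decoded motion information. All names and data structures below are hypothetical illustrations, not the patent's implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class MotionInfo:
    # Minimal stand-in for decoded motion information:
    # a motion vector plus a reference frame index.
    mv_x: int
    mv_y: int
    ref_idx: int

def candidates_for_second_region(first_mi: MotionInfo,
                                 neighbors: List[MotionInfo]) -> List[MotionInfo]:
    """Merge candidates for the second prediction region: decoded motion
    information of neighboring regions, excluding the first prediction
    region's motion information and anything coinciding with it."""
    return [mi for mi in neighbors if mi != first_mi]

def restore_second_motion_info(merge_flag: bool,
                               selection_idx: Optional[int],
                               explicit_mi: Optional[MotionInfo],
                               first_mi: MotionInfo,
                               neighbors: List[MotionInfo]) -> MotionInfo:
    """Simplified mirror of the decoding branches in claims 3 and 4."""
    if merge_flag:
        # Merge: pick the signalled candidate from the reduced list.
        candidates = candidates_for_second_region(first_mi, neighbors)
        return candidates[selection_idx]
    # No merge: the motion information was transmitted explicitly.
    return explicit_mi

# Example: the neighbor equal to the first region's motion information is
# skipped, so selection index 0 yields (4, 1, 0) rather than (2, 3, 0).
first = MotionInfo(2, 3, 0)
nbrs = [MotionInfo(2, 3, 0), MotionInfo(4, 1, 0)]
assert restore_second_motion_info(True, 0, None, first, nbrs) == MotionInfo(4, 1, 0)
```

Excluding candidates identical to the first prediction region's motion information keeps the two prediction regions of one target region from receiving the same predicted signal, which is presumably why the claims carve those candidates out of the second region's list.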
Legal Status:
2018-03-27 | B15K | Others concerning applications: alteration of classification | IPC: H04N 19/61 (2014.01), H04N 19/119 (2014.01), H04N
2019-01-08 | B09A | Decision: intention to grant [chapter 9.1 patent gazette]
2019-01-29 | B16A | Patent or certificate of addition of invention granted [chapter 16.1 patent gazette] | Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 14/07/2011, SUBJECT TO THE LEGAL CONDITIONS.
Priority:
Application number | Filing date | Patent title
JP2010-163245 | 2010-07-20 |
JP2010-174869 | 2010-08-03 |
PCT/JP2011/066120 | 2011-07-14 | Image prediction encoding device, image prediction encoding method, image prediction encoding program, image prediction decoding device, image prediction decoding method, and image prediction decoding program